Test Report: Docker_Linux_crio_arm64 21409

2aa028e6c9ae4a79883616b371bbf57b9811dc19:2025-10-14:41906

Failed tests (42/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.29
32 TestAddons/serial/GCPAuth/FakeCredentials 8.41
35 TestAddons/parallel/Registry 15.13
36 TestAddons/parallel/RegistryCreds 0.55
37 TestAddons/parallel/Ingress 145
38 TestAddons/parallel/InspektorGadget 5.3
39 TestAddons/parallel/MetricsServer 5.53
41 TestAddons/parallel/CSI 52.55
42 TestAddons/parallel/Headlamp 3.15
43 TestAddons/parallel/CloudSpanner 5.29
44 TestAddons/parallel/LocalPath 9.6
45 TestAddons/parallel/NvidiaDevicePlugin 5.31
46 TestAddons/parallel/Yakd 6.27
52 TestForceSystemdFlag 519.34
53 TestForceSystemdEnv 512.26
98 TestFunctional/parallel/ServiceCmdConnect 603.5
126 TestFunctional/parallel/ServiceCmd/DeployApp 600.89
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
136 TestFunctional/parallel/ServiceCmd/Format 0.54
137 TestFunctional/parallel/ServiceCmd/URL 0.54
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.34
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.36
151 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.49
152 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.25
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 532.09
175 TestMultiControlPlane/serial/DeleteSecondaryNode 8.66
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 3.23
191 TestJSONOutput/pause/Command 2.44
197 TestJSONOutput/unpause/Command 2.11
281 TestPause/serial/Pause 7.26
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.56
305 TestStartStop/group/old-k8s-version/serial/Pause 7.8
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.46
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.14
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.41
327 TestStartStop/group/embed-certs/serial/Pause 7.53
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.71
336 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.79
341 TestStartStop/group/newest-cni/serial/Pause 6.88
348 TestStartStop/group/no-preload/serial/Pause 6.41
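
To rerun one entry from this table locally, the standard Go sub-test selector against the integration package is normally enough. This is a hedged sketch only: it assumes a prebuilt out/minikube-linux-arm64 binary and the docker/crio start flags this job uses, and the harness takes additional flags not shown here.

	# hypothetical local rerun of a single failed sub-test; exact harness flags may differ
	go test ./test/integration -run 'TestAddons/parallel/Registry' -timeout 30m
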
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-864166 addons disable volcano --alsologtostderr -v=1: exit status 11 (288.320813ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:01:43.689127  553674 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:01:43.689874  553674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:01:43.689892  553674 out.go:374] Setting ErrFile to fd 2...
	I1014 20:01:43.689898  553674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:01:43.690212  553674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:01:43.690515  553674 mustload.go:65] Loading cluster: addons-864166
	I1014 20:01:43.690927  553674 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:01:43.690946  553674 addons.go:606] checking whether the cluster is paused
	I1014 20:01:43.691051  553674 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:01:43.691072  553674 host.go:66] Checking if "addons-864166" exists ...
	I1014 20:01:43.691535  553674 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 20:01:43.711602  553674 ssh_runner.go:195] Run: systemctl --version
	I1014 20:01:43.711675  553674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 20:01:43.730314  553674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 20:01:43.834760  553674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:01:43.834840  553674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:01:43.868747  553674 cri.go:89] found id: "78ef6dffdb260387212a93303124e048ee15ae3f59fe9e9bd35468b8d107084b"
	I1014 20:01:43.868768  553674 cri.go:89] found id: "3ed114fa3e5da1f56e85f7fdeb97a771d558093b848ce41b26b0190620cf6a24"
	I1014 20:01:43.868772  553674 cri.go:89] found id: "fc727586dde63cef131e5e2dfdd8994cdc8ea730335a4f871bba2a005befe4e8"
	I1014 20:01:43.868776  553674 cri.go:89] found id: "7ef76141e924284d99fe6357b0cf6e9a9814008d19175065a40cde5b8528e5eb"
	I1014 20:01:43.868780  553674 cri.go:89] found id: "c5c1390f4e419d576d9abf3bc94f4f415d5506cd18519bbef0ed92f76dcb8563"
	I1014 20:01:43.868784  553674 cri.go:89] found id: "f351a573847cd029abb7fab37c1b8dc5b3394d4b549cabc0ca60a3397fc69b5a"
	I1014 20:01:43.868792  553674 cri.go:89] found id: "6b79820d7c3ccb688d9ce73d50754490d90aed59de1d994b1ea5f51e0ef0ce41"
	I1014 20:01:43.868795  553674 cri.go:89] found id: "5c84e205968bfa0f86e3149ec248c3043ee206110d9dd9cae1c33f560650623f"
	I1014 20:01:43.868798  553674 cri.go:89] found id: "bb4fccdbcf0f5935eade34108ffaaeb4808ebf3729950f90af52693ef750035d"
	I1014 20:01:43.868804  553674 cri.go:89] found id: "3ca6b7f2cc2b00918861fa8884e5be14bedcb23e8716f7e52f25cedb6f1c315b"
	I1014 20:01:43.868807  553674 cri.go:89] found id: "6e40d4e7135a05eeab04f2727c8cb3b945e0f7b35864c83e20e72da10b93b9db"
	I1014 20:01:43.868810  553674 cri.go:89] found id: "3e1c1eae40cfd23976decfe14caca8cae08ad49efaea19f05efbabe9e41e3687"
	I1014 20:01:43.868813  553674 cri.go:89] found id: "8b466c8617b98ff2a65c993bf79bfb679961f4698a8118de615eca5609bee49c"
	I1014 20:01:43.868816  553674 cri.go:89] found id: "b80cf57eb9e2bee53026bb3cbe3b0443bc8a1032d1a442cdcd330a87563f6d9f"
	I1014 20:01:43.868819  553674 cri.go:89] found id: "a327e10ba76231afd828f288b7c929804d2fa432f0c2fb5d440b6d4cbfc09050"
	I1014 20:01:43.868824  553674 cri.go:89] found id: "7d43ec8e4714ec16f6fd9c499d2855be2b62082b41c665e68fd2ff0150fb37f0"
	I1014 20:01:43.868827  553674 cri.go:89] found id: "a97b20dece5cef1c39d364f190fe8456df7716b92bb6f6df215eb010bcf02fbd"
	I1014 20:01:43.868830  553674 cri.go:89] found id: "42ea884de055b59b2ffed5e7b9d28454868620756a73d3280a1aa24fb121dc9d"
	I1014 20:01:43.868834  553674 cri.go:89] found id: "85281702b49614b94db01ce11d8ea99610d57489ec6345063e8f199a3d311e3c"
	I1014 20:01:43.868836  553674 cri.go:89] found id: "b10c05c5f1576c84d6e7be3e027ef408b730a5606775cc99dc050a8b46cf99fe"
	I1014 20:01:43.868841  553674 cri.go:89] found id: "9a2e8bf9aef9f2ba07816aca3818d22c434fd19e119b2383aa10a93919d3ece5"
	I1014 20:01:43.868843  553674 cri.go:89] found id: "e4d0d5d58460a7cf4d55d124ee5894e17b05fee1741e46856247cdb064c0b542"
	I1014 20:01:43.868846  553674 cri.go:89] found id: "2db14186188f011755f8c1390f57aedc39076f38c3d4a6a2c599a1788e75fa6b"
	I1014 20:01:43.868849  553674 cri.go:89] found id: ""
	I1014 20:01:43.868900  553674 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 20:01:43.884140  553674 out.go:203] 
	W1014 20:01:43.887102  553674 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:01:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:01:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 20:01:43.887139  553674 out.go:285] * 
	* 
	W1014 20:01:43.894202  553674 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:01:43.897377  553674 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-864166 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.29s)
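
The disable call never reaches the addon: minikube first checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json", and on this node /run/runc does not exist, so the check itself aborts with exit 11 (MK_ADDON_DISABLE_PAUSED) even though the cluster is running. A minimal sketch of reproducing just that check, using the same commands the log shows (the profile name addons-864166 is specific to this run):

	out/minikube-linux-arm64 -p addons-864166 ssh
	# inside the node:
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds, prints the container IDs listed above
	sudo runc list -f json                                                      # fails: open /run/runc: no such file or directory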

TestAddons/serial/GCPAuth/FakeCredentials (8.41s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-864166 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-864166 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [14edcf26-2792-4adf-8b7b-e4965d42ac66] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [14edcf26-2792-4adf-8b7b-e4965d42ac66] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003670737s
addons_test.go:694: (dbg) Run:  kubectl --context addons-864166 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:694: (dbg) Non-zero exit: kubectl --context addons-864166 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS": exit status 1 (135.119643ms)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
addons_test.go:696: printenv creds: exit status 1
--- FAIL: TestAddons/serial/GCPAuth/FakeCredentials (8.41s)
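
The busybox pod became Ready, but printenv exited 1, i.e. GOOGLE_APPLICATION_CREDENTIALS was never set in the container, which points at the gcp-auth webhook not mutating the pod. A hedged way to confirm that from the same kubectl context (plain kubectl; the webhook name is whatever the gcp-auth addon registers):

	kubectl --context addons-864166 get pod busybox -o jsonpath='{.spec.containers[0].env}'
	kubectl --context addons-864166 get mutatingwebhookconfigurations | grep -i gcp-auth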

TestAddons/parallel/Registry (15.13s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.217476ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-ph85v" [5bda4348-2158-45b7-81f9-366fd18b681a] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004408817s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-ww88v" [b91a51f7-e2da-4901-8d80-579ad73448e2] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005117365s
addons_test.go:392: (dbg) Run:  kubectl --context addons-864166 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-864166 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-864166 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.543570999s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 ip
2025/10/14 20:02:10 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-864166 addons disable registry --alsologtostderr -v=1: exit status 11 (279.130248ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:02:10.427260  555031 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:02:10.428023  555031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:02:10.428063  555031 out.go:374] Setting ErrFile to fd 2...
	I1014 20:02:10.428084  555031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:02:10.428517  555031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:02:10.428884  555031 mustload.go:65] Loading cluster: addons-864166
	I1014 20:02:10.429332  555031 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:02:10.429369  555031 addons.go:606] checking whether the cluster is paused
	I1014 20:02:10.429537  555031 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:02:10.429591  555031 host.go:66] Checking if "addons-864166" exists ...
	I1014 20:02:10.430182  555031 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 20:02:10.453384  555031 ssh_runner.go:195] Run: systemctl --version
	I1014 20:02:10.453436  555031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 20:02:10.473078  555031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 20:02:10.579306  555031 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:02:10.579423  555031 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:02:10.611972  555031 cri.go:89] found id: "78ef6dffdb260387212a93303124e048ee15ae3f59fe9e9bd35468b8d107084b"
	I1014 20:02:10.611991  555031 cri.go:89] found id: "3ed114fa3e5da1f56e85f7fdeb97a771d558093b848ce41b26b0190620cf6a24"
	I1014 20:02:10.611996  555031 cri.go:89] found id: "fc727586dde63cef131e5e2dfdd8994cdc8ea730335a4f871bba2a005befe4e8"
	I1014 20:02:10.612000  555031 cri.go:89] found id: "7ef76141e924284d99fe6357b0cf6e9a9814008d19175065a40cde5b8528e5eb"
	I1014 20:02:10.612008  555031 cri.go:89] found id: "c5c1390f4e419d576d9abf3bc94f4f415d5506cd18519bbef0ed92f76dcb8563"
	I1014 20:02:10.612012  555031 cri.go:89] found id: "f351a573847cd029abb7fab37c1b8dc5b3394d4b549cabc0ca60a3397fc69b5a"
	I1014 20:02:10.612015  555031 cri.go:89] found id: "6b79820d7c3ccb688d9ce73d50754490d90aed59de1d994b1ea5f51e0ef0ce41"
	I1014 20:02:10.612019  555031 cri.go:89] found id: "5c84e205968bfa0f86e3149ec248c3043ee206110d9dd9cae1c33f560650623f"
	I1014 20:02:10.612022  555031 cri.go:89] found id: "bb4fccdbcf0f5935eade34108ffaaeb4808ebf3729950f90af52693ef750035d"
	I1014 20:02:10.612028  555031 cri.go:89] found id: "3ca6b7f2cc2b00918861fa8884e5be14bedcb23e8716f7e52f25cedb6f1c315b"
	I1014 20:02:10.612032  555031 cri.go:89] found id: "6e40d4e7135a05eeab04f2727c8cb3b945e0f7b35864c83e20e72da10b93b9db"
	I1014 20:02:10.612035  555031 cri.go:89] found id: "3e1c1eae40cfd23976decfe14caca8cae08ad49efaea19f05efbabe9e41e3687"
	I1014 20:02:10.612039  555031 cri.go:89] found id: "8b466c8617b98ff2a65c993bf79bfb679961f4698a8118de615eca5609bee49c"
	I1014 20:02:10.612043  555031 cri.go:89] found id: "b80cf57eb9e2bee53026bb3cbe3b0443bc8a1032d1a442cdcd330a87563f6d9f"
	I1014 20:02:10.612046  555031 cri.go:89] found id: "a327e10ba76231afd828f288b7c929804d2fa432f0c2fb5d440b6d4cbfc09050"
	I1014 20:02:10.612053  555031 cri.go:89] found id: "7d43ec8e4714ec16f6fd9c499d2855be2b62082b41c665e68fd2ff0150fb37f0"
	I1014 20:02:10.612056  555031 cri.go:89] found id: "a97b20dece5cef1c39d364f190fe8456df7716b92bb6f6df215eb010bcf02fbd"
	I1014 20:02:10.612060  555031 cri.go:89] found id: "42ea884de055b59b2ffed5e7b9d28454868620756a73d3280a1aa24fb121dc9d"
	I1014 20:02:10.612063  555031 cri.go:89] found id: "85281702b49614b94db01ce11d8ea99610d57489ec6345063e8f199a3d311e3c"
	I1014 20:02:10.612066  555031 cri.go:89] found id: "b10c05c5f1576c84d6e7be3e027ef408b730a5606775cc99dc050a8b46cf99fe"
	I1014 20:02:10.612070  555031 cri.go:89] found id: "9a2e8bf9aef9f2ba07816aca3818d22c434fd19e119b2383aa10a93919d3ece5"
	I1014 20:02:10.612076  555031 cri.go:89] found id: "e4d0d5d58460a7cf4d55d124ee5894e17b05fee1741e46856247cdb064c0b542"
	I1014 20:02:10.612079  555031 cri.go:89] found id: "2db14186188f011755f8c1390f57aedc39076f38c3d4a6a2c599a1788e75fa6b"
	I1014 20:02:10.612082  555031 cri.go:89] found id: ""
	I1014 20:02:10.612164  555031 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 20:02:10.627317  555031 out.go:203] 
	W1014 20:02:10.630325  555031 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:02:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:02:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 20:02:10.630355  555031 out.go:285] * 
	* 
	W1014 20:02:10.637431  555031 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:02:10.640358  555031 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-864166 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.13s)
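
The registry addon itself was healthy in this run: both pod waits passed, the in-cluster wget --spider succeeded, and the GET to 192.168.49.2:5000 returned. The FAIL comes from the trailing "addons disable registry" call hitting the same runc-based paused check described under Volcano above. A hedged sketch for inspecting the addon directly, bypassing the disable path (the label and service name are the ones the test and the cluster DNS name imply):

	kubectl --context addons-864166 -n kube-system get pods -l actual-registry=true
	kubectl --context addons-864166 -n kube-system get svc registry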

TestAddons/parallel/RegistryCreds (0.55s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.592805ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-864166
addons_test.go:332: (dbg) Run:  kubectl --context addons-864166 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-864166 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (266.480279ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:02:51.554263  556202 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:02:51.555186  556202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:02:51.555232  556202 out.go:374] Setting ErrFile to fd 2...
	I1014 20:02:51.555253  556202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:02:51.555573  556202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:02:51.555913  556202 mustload.go:65] Loading cluster: addons-864166
	I1014 20:02:51.556391  556202 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:02:51.556438  556202 addons.go:606] checking whether the cluster is paused
	I1014 20:02:51.556571  556202 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:02:51.556612  556202 host.go:66] Checking if "addons-864166" exists ...
	I1014 20:02:51.557087  556202 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 20:02:51.576510  556202 ssh_runner.go:195] Run: systemctl --version
	I1014 20:02:51.576570  556202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 20:02:51.594876  556202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 20:02:51.699323  556202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:02:51.699416  556202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:02:51.730050  556202 cri.go:89] found id: "78ef6dffdb260387212a93303124e048ee15ae3f59fe9e9bd35468b8d107084b"
	I1014 20:02:51.730083  556202 cri.go:89] found id: "3ed114fa3e5da1f56e85f7fdeb97a771d558093b848ce41b26b0190620cf6a24"
	I1014 20:02:51.730089  556202 cri.go:89] found id: "fc727586dde63cef131e5e2dfdd8994cdc8ea730335a4f871bba2a005befe4e8"
	I1014 20:02:51.730093  556202 cri.go:89] found id: "7ef76141e924284d99fe6357b0cf6e9a9814008d19175065a40cde5b8528e5eb"
	I1014 20:02:51.730096  556202 cri.go:89] found id: "c5c1390f4e419d576d9abf3bc94f4f415d5506cd18519bbef0ed92f76dcb8563"
	I1014 20:02:51.730100  556202 cri.go:89] found id: "f351a573847cd029abb7fab37c1b8dc5b3394d4b549cabc0ca60a3397fc69b5a"
	I1014 20:02:51.730103  556202 cri.go:89] found id: "6b79820d7c3ccb688d9ce73d50754490d90aed59de1d994b1ea5f51e0ef0ce41"
	I1014 20:02:51.730106  556202 cri.go:89] found id: "5c84e205968bfa0f86e3149ec248c3043ee206110d9dd9cae1c33f560650623f"
	I1014 20:02:51.730110  556202 cri.go:89] found id: "bb4fccdbcf0f5935eade34108ffaaeb4808ebf3729950f90af52693ef750035d"
	I1014 20:02:51.730117  556202 cri.go:89] found id: "3ca6b7f2cc2b00918861fa8884e5be14bedcb23e8716f7e52f25cedb6f1c315b"
	I1014 20:02:51.730121  556202 cri.go:89] found id: "6e40d4e7135a05eeab04f2727c8cb3b945e0f7b35864c83e20e72da10b93b9db"
	I1014 20:02:51.730124  556202 cri.go:89] found id: "3e1c1eae40cfd23976decfe14caca8cae08ad49efaea19f05efbabe9e41e3687"
	I1014 20:02:51.730139  556202 cri.go:89] found id: "8b466c8617b98ff2a65c993bf79bfb679961f4698a8118de615eca5609bee49c"
	I1014 20:02:51.730143  556202 cri.go:89] found id: "b80cf57eb9e2bee53026bb3cbe3b0443bc8a1032d1a442cdcd330a87563f6d9f"
	I1014 20:02:51.730147  556202 cri.go:89] found id: "a327e10ba76231afd828f288b7c929804d2fa432f0c2fb5d440b6d4cbfc09050"
	I1014 20:02:51.730156  556202 cri.go:89] found id: "7d43ec8e4714ec16f6fd9c499d2855be2b62082b41c665e68fd2ff0150fb37f0"
	I1014 20:02:51.730164  556202 cri.go:89] found id: "a97b20dece5cef1c39d364f190fe8456df7716b92bb6f6df215eb010bcf02fbd"
	I1014 20:02:51.730170  556202 cri.go:89] found id: "42ea884de055b59b2ffed5e7b9d28454868620756a73d3280a1aa24fb121dc9d"
	I1014 20:02:51.730173  556202 cri.go:89] found id: "85281702b49614b94db01ce11d8ea99610d57489ec6345063e8f199a3d311e3c"
	I1014 20:02:51.730176  556202 cri.go:89] found id: "b10c05c5f1576c84d6e7be3e027ef408b730a5606775cc99dc050a8b46cf99fe"
	I1014 20:02:51.730181  556202 cri.go:89] found id: "9a2e8bf9aef9f2ba07816aca3818d22c434fd19e119b2383aa10a93919d3ece5"
	I1014 20:02:51.730184  556202 cri.go:89] found id: "e4d0d5d58460a7cf4d55d124ee5894e17b05fee1741e46856247cdb064c0b542"
	I1014 20:02:51.730187  556202 cri.go:89] found id: "2db14186188f011755f8c1390f57aedc39076f38c3d4a6a2c599a1788e75fa6b"
	I1014 20:02:51.730190  556202 cri.go:89] found id: ""
	I1014 20:02:51.730242  556202 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 20:02:51.745414  556202 out.go:203] 
	W1014 20:02:51.748255  556202 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:02:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:02:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 20:02:51.748274  556202 out.go:285] * 
	* 
	W1014 20:02:51.755598  556202 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:02:51.758546  556202 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-864166 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.55s)

TestAddons/parallel/Ingress (145s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-864166 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-864166 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-864166 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [92172102-5348-4c23-935b-a5e97ff54ce5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [92172102-5348-4c23-935b-a5e97ff54ce5] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.011715991s
I1014 20:02:32.100027  546910 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-864166 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.074890452s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
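
Exit status 28 maps to curl's operation-timed-out error, so the request to port 80 on the node hung rather than being refused. A hedged first check from the same context is whether the ingress-nginx controller service has endpoints and whether the test's Ingress was assigned an address (plain kubectl; object names inside ingress-nginx may differ):

	kubectl --context addons-864166 -n ingress-nginx get svc,endpoints
	kubectl --context addons-864166 get ingress -A -o wide
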
addons_test.go:288: (dbg) Run:  kubectl --context addons-864166 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-864166
helpers_test.go:243: (dbg) docker inspect addons-864166:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7451099bfbc41feb339304668d302d413eb146e933e638dfa5e77237a964fd90",
	        "Created": "2025-10-14T19:59:08.394875124Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 548071,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T19:59:08.46061074Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/7451099bfbc41feb339304668d302d413eb146e933e638dfa5e77237a964fd90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7451099bfbc41feb339304668d302d413eb146e933e638dfa5e77237a964fd90/hostname",
	        "HostsPath": "/var/lib/docker/containers/7451099bfbc41feb339304668d302d413eb146e933e638dfa5e77237a964fd90/hosts",
	        "LogPath": "/var/lib/docker/containers/7451099bfbc41feb339304668d302d413eb146e933e638dfa5e77237a964fd90/7451099bfbc41feb339304668d302d413eb146e933e638dfa5e77237a964fd90-json.log",
	        "Name": "/addons-864166",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-864166:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-864166",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7451099bfbc41feb339304668d302d413eb146e933e638dfa5e77237a964fd90",
	                "LowerDir": "/var/lib/docker/overlay2/2f2a88b10d125f6aaea517e43a2e0800d882f5b574df3ba7c57740ffc15694c4-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2f2a88b10d125f6aaea517e43a2e0800d882f5b574df3ba7c57740ffc15694c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2f2a88b10d125f6aaea517e43a2e0800d882f5b574df3ba7c57740ffc15694c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2f2a88b10d125f6aaea517e43a2e0800d882f5b574df3ba7c57740ffc15694c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-864166",
	                "Source": "/var/lib/docker/volumes/addons-864166/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-864166",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-864166",
	                "name.minikube.sigs.k8s.io": "addons-864166",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "be5a04316aaeb19d26e3e2d0bf6f4cd3f78ecd0d6fe86a3b2acf28a7fff5b609",
	            "SandboxKey": "/var/run/docker/netns/be5a04316aae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-864166": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:67:89:2e:6e:65",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e3a58de1d4bdf03bb1ca046984c4ec6e75aca4aca92e4ea3848e0b272acbfbba",
	                    "EndpointID": "e96875bb542fc4d29234fbee842e81794c3f7af944116a2ac8105d4873e2201e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-864166",
	                        "7451099bfbc4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-864166 -n addons-864166
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-864166 logs -n 25: (1.428466087s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-282779                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-282779 │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │ 14 Oct 25 19:58 UTC │
	│ start   │ --download-only -p binary-mirror-998704 --alsologtostderr --binary-mirror http://127.0.0.1:42403 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-998704   │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │                     │
	│ delete  │ -p binary-mirror-998704                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-998704   │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │ 14 Oct 25 19:58 UTC │
	│ addons  │ enable dashboard -p addons-864166                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │                     │
	│ addons  │ disable dashboard -p addons-864166                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │                     │
	│ start   │ -p addons-864166 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │ 14 Oct 25 20:01 UTC │
	│ addons  │ addons-864166 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:01 UTC │                     │
	│ addons  │ addons-864166 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:01 UTC │                     │
	│ addons  │ enable headlamp -p addons-864166 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:01 UTC │                     │
	│ addons  │ addons-864166 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:01 UTC │                     │
	│ ip      │ addons-864166 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:02 UTC │ 14 Oct 25 20:02 UTC │
	│ addons  │ addons-864166 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:02 UTC │                     │
	│ addons  │ addons-864166 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:02 UTC │                     │
	│ addons  │ addons-864166 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:02 UTC │                     │
	│ ssh     │ addons-864166 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:02 UTC │                     │
	│ addons  │ addons-864166 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:02 UTC │                     │
	│ addons  │ addons-864166 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:02 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-864166                                                                                                                                                                                                                                                                                                                                                                                           │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:02 UTC │ 14 Oct 25 20:02 UTC │
	│ addons  │ addons-864166 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:02 UTC │                     │
	│ ssh     │ addons-864166 ssh cat /opt/local-path-provisioner/pvc-3acf2ef9-374e-4fc8-a1d8-3c1f2c4c5867_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │ 14 Oct 25 20:03 UTC │
	│ addons  │ addons-864166 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │                     │
	│ addons  │ addons-864166 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │                     │
	│ addons  │ addons-864166 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │                     │
	│ addons  │ addons-864166 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │                     │
	│ ip      │ addons-864166 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:04 UTC │ 14 Oct 25 20:04 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:58:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:58:41.639027  547669 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:58:41.639137  547669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:58:41.639146  547669 out.go:374] Setting ErrFile to fd 2...
	I1014 19:58:41.639151  547669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:58:41.639412  547669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 19:58:41.639849  547669 out.go:368] Setting JSON to false
	I1014 19:58:41.640726  547669 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9672,"bootTime":1760462250,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 19:58:41.640797  547669 start.go:141] virtualization:  
	I1014 19:58:41.644186  547669 out.go:179] * [addons-864166] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 19:58:41.648066  547669 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:58:41.648121  547669 notify.go:220] Checking for updates...
	I1014 19:58:41.653788  547669 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:58:41.656654  547669 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 19:58:41.659504  547669 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 19:58:41.662424  547669 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 19:58:41.665236  547669 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:58:41.668299  547669 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:58:41.691761  547669 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 19:58:41.691890  547669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:58:41.755907  547669 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-14 19:58:41.746778275 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 19:58:41.756012  547669 docker.go:318] overlay module found
	I1014 19:58:41.759164  547669 out.go:179] * Using the docker driver based on user configuration
	I1014 19:58:41.761976  547669 start.go:305] selected driver: docker
	I1014 19:58:41.761996  547669 start.go:925] validating driver "docker" against <nil>
	I1014 19:58:41.762011  547669 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:58:41.762743  547669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:58:41.816842  547669 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-14 19:58:41.808180497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 19:58:41.817000  547669 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 19:58:41.817237  547669 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 19:58:41.820194  547669 out.go:179] * Using Docker driver with root privileges
	I1014 19:58:41.823068  547669 cni.go:84] Creating CNI manager for ""
	I1014 19:58:41.823143  547669 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:58:41.823161  547669 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 19:58:41.823246  547669 start.go:349] cluster config:
	{Name:addons-864166 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-864166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:58:41.828122  547669 out.go:179] * Starting "addons-864166" primary control-plane node in "addons-864166" cluster
	I1014 19:58:41.830983  547669 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 19:58:41.833899  547669 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 19:58:41.836757  547669 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:58:41.836813  547669 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 19:58:41.836827  547669 cache.go:58] Caching tarball of preloaded images
	I1014 19:58:41.836838  547669 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 19:58:41.836925  547669 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 19:58:41.836935  547669 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 19:58:41.837272  547669 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/config.json ...
	I1014 19:58:41.837302  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/config.json: {Name:mkd4c5f5a8f0cbd8c057a027d6233016271076f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:58:41.852403  547669 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1014 19:58:41.852551  547669 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1014 19:58:41.852574  547669 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1014 19:58:41.852582  547669 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1014 19:58:41.852590  547669 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1014 19:58:41.852596  547669 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from local cache
	I1014 19:58:59.919684  547669 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from cached tarball
	I1014 19:58:59.919722  547669 cache.go:232] Successfully downloaded all kic artifacts
	I1014 19:58:59.919751  547669 start.go:360] acquireMachinesLock for addons-864166: {Name:mk2ce089a8fc2998b0a1e4a216c7aeae941b6849 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 19:58:59.919867  547669 start.go:364] duration metric: took 94.942µs to acquireMachinesLock for "addons-864166"
	I1014 19:58:59.919900  547669 start.go:93] Provisioning new machine with config: &{Name:addons-864166 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-864166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 19:58:59.919989  547669 start.go:125] createHost starting for "" (driver="docker")
	I1014 19:58:59.923423  547669 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1014 19:58:59.923667  547669 start.go:159] libmachine.API.Create for "addons-864166" (driver="docker")
	I1014 19:58:59.923710  547669 client.go:168] LocalClient.Create starting
	I1014 19:58:59.923819  547669 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem
	I1014 19:59:01.300154  547669 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem
	I1014 19:59:01.593064  547669 cli_runner.go:164] Run: docker network inspect addons-864166 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 19:59:01.609867  547669 cli_runner.go:211] docker network inspect addons-864166 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 19:59:01.609965  547669 network_create.go:284] running [docker network inspect addons-864166] to gather additional debugging logs...
	I1014 19:59:01.609989  547669 cli_runner.go:164] Run: docker network inspect addons-864166
	W1014 19:59:01.625983  547669 cli_runner.go:211] docker network inspect addons-864166 returned with exit code 1
	I1014 19:59:01.626016  547669 network_create.go:287] error running [docker network inspect addons-864166]: docker network inspect addons-864166: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-864166 not found
	I1014 19:59:01.626046  547669 network_create.go:289] output of [docker network inspect addons-864166]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-864166 not found
	
	** /stderr **
	I1014 19:59:01.626145  547669 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 19:59:01.641333  547669 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a03c10}
	I1014 19:59:01.641384  547669 network_create.go:124] attempt to create docker network addons-864166 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 19:59:01.641437  547669 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-864166 addons-864166
	I1014 19:59:01.691182  547669 network_create.go:108] docker network addons-864166 192.168.49.0/24 created
	I1014 19:59:01.691236  547669 kic.go:121] calculated static IP "192.168.49.2" for the "addons-864166" container
	I1014 19:59:01.691308  547669 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 19:59:01.706081  547669 cli_runner.go:164] Run: docker volume create addons-864166 --label name.minikube.sigs.k8s.io=addons-864166 --label created_by.minikube.sigs.k8s.io=true
	I1014 19:59:01.722403  547669 oci.go:103] Successfully created a docker volume addons-864166
	I1014 19:59:01.722494  547669 cli_runner.go:164] Run: docker run --rm --name addons-864166-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-864166 --entrypoint /usr/bin/test -v addons-864166:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 19:59:03.900569  547669 cli_runner.go:217] Completed: docker run --rm --name addons-864166-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-864166 --entrypoint /usr/bin/test -v addons-864166:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (2.178023903s)
	I1014 19:59:03.900599  547669 oci.go:107] Successfully prepared a docker volume addons-864166
	I1014 19:59:03.900627  547669 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:59:03.900650  547669 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 19:59:03.900719  547669 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-864166:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 19:59:08.327862  547669 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-864166:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.427089557s)
	I1014 19:59:08.327895  547669 kic.go:203] duration metric: took 4.427245677s to extract preloaded images to volume ...
	W1014 19:59:08.328027  547669 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1014 19:59:08.328135  547669 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 19:59:08.380523  547669 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-864166 --name addons-864166 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-864166 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-864166 --network addons-864166 --ip 192.168.49.2 --volume addons-864166:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 19:59:08.677398  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Running}}
	I1014 19:59:08.700964  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:08.722986  547669 cli_runner.go:164] Run: docker exec addons-864166 stat /var/lib/dpkg/alternatives/iptables
	I1014 19:59:08.775969  547669 oci.go:144] the created container "addons-864166" has a running status.
	I1014 19:59:08.775996  547669 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa...
	I1014 19:59:09.019931  547669 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 19:59:09.054172  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:09.075760  547669 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 19:59:09.075786  547669 kic_runner.go:114] Args: [docker exec --privileged addons-864166 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 19:59:09.150803  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:09.191410  547669 machine.go:93] provisionDockerMachine start ...
	I1014 19:59:09.191527  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:09.211979  547669 main.go:141] libmachine: Using SSH client type: native
	I1014 19:59:09.212305  547669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33507 <nil> <nil>}
	I1014 19:59:09.212487  547669 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 19:59:09.213174  547669 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37518->127.0.0.1:33507: read: connection reset by peer
	I1014 19:59:12.355908  547669 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-864166
	
	I1014 19:59:12.355943  547669 ubuntu.go:182] provisioning hostname "addons-864166"
	I1014 19:59:12.356031  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:12.372774  547669 main.go:141] libmachine: Using SSH client type: native
	I1014 19:59:12.373073  547669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33507 <nil> <nil>}
	I1014 19:59:12.373088  547669 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-864166 && echo "addons-864166" | sudo tee /etc/hostname
	I1014 19:59:12.525496  547669 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-864166
	
	I1014 19:59:12.525583  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:12.542457  547669 main.go:141] libmachine: Using SSH client type: native
	I1014 19:59:12.542770  547669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33507 <nil> <nil>}
	I1014 19:59:12.542786  547669 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-864166' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-864166/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-864166' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 19:59:12.688584  547669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 19:59:12.688611  547669 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 19:59:12.688634  547669 ubuntu.go:190] setting up certificates
	I1014 19:59:12.688662  547669 provision.go:84] configureAuth start
	I1014 19:59:12.688739  547669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-864166
	I1014 19:59:12.708787  547669 provision.go:143] copyHostCerts
	I1014 19:59:12.708875  547669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 19:59:12.709011  547669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 19:59:12.709078  547669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 19:59:12.709142  547669 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.addons-864166 san=[127.0.0.1 192.168.49.2 addons-864166 localhost minikube]
	I1014 19:59:13.053424  547669 provision.go:177] copyRemoteCerts
	I1014 19:59:13.053510  547669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 19:59:13.053556  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:13.072784  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:13.176030  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 19:59:13.193304  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 19:59:13.210925  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 19:59:13.228157  547669 provision.go:87] duration metric: took 539.461746ms to configureAuth
	I1014 19:59:13.228182  547669 ubuntu.go:206] setting minikube options for container-runtime
	I1014 19:59:13.228479  547669 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:59:13.228596  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:13.245599  547669 main.go:141] libmachine: Using SSH client type: native
	I1014 19:59:13.245911  547669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33507 <nil> <nil>}
	I1014 19:59:13.245930  547669 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 19:59:13.500936  547669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 19:59:13.500959  547669 machine.go:96] duration metric: took 4.309519213s to provisionDockerMachine
	I1014 19:59:13.500974  547669 client.go:171] duration metric: took 13.577251094s to LocalClient.Create
	I1014 19:59:13.500987  547669 start.go:167] duration metric: took 13.577321584s to libmachine.API.Create "addons-864166"
	I1014 19:59:13.500994  547669 start.go:293] postStartSetup for "addons-864166" (driver="docker")
	I1014 19:59:13.501004  547669 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 19:59:13.501066  547669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 19:59:13.501108  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:13.524432  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:13.628667  547669 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 19:59:13.632079  547669 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 19:59:13.632156  547669 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 19:59:13.632175  547669 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 19:59:13.632258  547669 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 19:59:13.632285  547669 start.go:296] duration metric: took 131.285349ms for postStartSetup
	I1014 19:59:13.632612  547669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-864166
	I1014 19:59:13.653249  547669 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/config.json ...
	I1014 19:59:13.653655  547669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 19:59:13.653717  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:13.670832  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:13.769283  547669 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 19:59:13.773992  547669 start.go:128] duration metric: took 13.853987234s to createHost
	I1014 19:59:13.774071  547669 start.go:83] releasing machines lock for "addons-864166", held for 13.854187656s
	I1014 19:59:13.774168  547669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-864166
	I1014 19:59:13.790702  547669 ssh_runner.go:195] Run: cat /version.json
	I1014 19:59:13.790754  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:13.791067  547669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 19:59:13.791132  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:13.812461  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:13.834328  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:13.915870  547669 ssh_runner.go:195] Run: systemctl --version
	I1014 19:59:14.007076  547669 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 19:59:14.045127  547669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 19:59:14.049555  547669 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 19:59:14.049635  547669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 19:59:14.085636  547669 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1014 19:59:14.085671  547669 start.go:495] detecting cgroup driver to use...
	I1014 19:59:14.085728  547669 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 19:59:14.085808  547669 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 19:59:14.103676  547669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 19:59:14.116279  547669 docker.go:218] disabling cri-docker service (if available) ...
	I1014 19:59:14.116421  547669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 19:59:14.134445  547669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 19:59:14.153402  547669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 19:59:14.269201  547669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 19:59:14.395235  547669 docker.go:234] disabling docker service ...
	I1014 19:59:14.395340  547669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 19:59:14.415427  547669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 19:59:14.428068  547669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 19:59:14.545170  547669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 19:59:14.669849  547669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 19:59:14.682688  547669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 19:59:14.696343  547669 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 19:59:14.696411  547669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:59:14.704854  547669 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 19:59:14.704920  547669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:59:14.713458  547669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:59:14.721575  547669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:59:14.730394  547669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 19:59:14.738141  547669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:59:14.746950  547669 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:59:14.760124  547669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:59:14.768744  547669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 19:59:14.776449  547669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 19:59:14.783629  547669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:59:14.898903  547669 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 19:59:15.040058  547669 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 19:59:15.040183  547669 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 19:59:15.045118  547669 start.go:563] Will wait 60s for crictl version
	I1014 19:59:15.045224  547669 ssh_runner.go:195] Run: which crictl
	I1014 19:59:15.049399  547669 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 19:59:15.075648  547669 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 19:59:15.075806  547669 ssh_runner.go:195] Run: crio --version
	I1014 19:59:15.105597  547669 ssh_runner.go:195] Run: crio --version
	I1014 19:59:15.143960  547669 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 19:59:15.146850  547669 cli_runner.go:164] Run: docker network inspect addons-864166 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 19:59:15.163084  547669 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 19:59:15.166914  547669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 19:59:15.176550  547669 kubeadm.go:883] updating cluster {Name:addons-864166 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-864166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 19:59:15.176679  547669 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:59:15.176743  547669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:59:15.213395  547669 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:59:15.213420  547669 crio.go:433] Images already preloaded, skipping extraction
	I1014 19:59:15.213478  547669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:59:15.238770  547669 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:59:15.238794  547669 cache_images.go:85] Images are preloaded, skipping loading
	I1014 19:59:15.238802  547669 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 19:59:15.238886  547669 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-864166 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-864166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 19:59:15.238971  547669 ssh_runner.go:195] Run: crio config
	I1014 19:59:15.300074  547669 cni.go:84] Creating CNI manager for ""
	I1014 19:59:15.300138  547669 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:59:15.300171  547669 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 19:59:15.300217  547669 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-864166 NodeName:addons-864166 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 19:59:15.300385  547669 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-864166"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 19:59:15.300472  547669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 19:59:15.308570  547669 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 19:59:15.308646  547669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 19:59:15.315898  547669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1014 19:59:15.328252  547669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 19:59:15.341630  547669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1014 19:59:15.354557  547669 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 19:59:15.358490  547669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 19:59:15.368048  547669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:59:15.484177  547669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:59:15.499729  547669 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166 for IP: 192.168.49.2
	I1014 19:59:15.499802  547669 certs.go:195] generating shared ca certs ...
	I1014 19:59:15.499832  547669 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:15.500024  547669 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 19:59:15.872884  547669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt ...
	I1014 19:59:15.872917  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt: {Name:mk9472c60fe4424db2314e7a3ea9aad0a426d75b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:15.873142  547669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key ...
	I1014 19:59:15.873158  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key: {Name:mkb78757b13d34b18d9982ea6de4fd9ce3851ba3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:15.873247  547669 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 19:59:16.723837  547669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt ...
	I1014 19:59:16.723872  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt: {Name:mkdc527181622d5081b41fd495a359446f4796c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:16.724071  547669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key ...
	I1014 19:59:16.724087  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key: {Name:mkf23a0a6ec0a92f1f62c58c46012f7c62f6dad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:16.724167  547669 certs.go:257] generating profile certs ...
	I1014 19:59:16.724228  547669 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.key
	I1014 19:59:16.724247  547669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt with IP's: []
	I1014 19:59:16.989230  547669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt ...
	I1014 19:59:16.989263  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: {Name:mk6c988c6bbb32f8bb4dffa2c3a7504ae056cad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:16.989447  547669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.key ...
	I1014 19:59:16.989459  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.key: {Name:mk5141325328a4872e465d6be680ceb5e65afb89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:16.989567  547669 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.key.44cc0e5d
	I1014 19:59:16.989590  547669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.crt.44cc0e5d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1014 19:59:18.483070  547669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.crt.44cc0e5d ...
	I1014 19:59:18.483103  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.crt.44cc0e5d: {Name:mk522b09f50d7966504b93cd25c94d3e85aef632 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:18.483293  547669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.key.44cc0e5d ...
	I1014 19:59:18.483309  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.key.44cc0e5d: {Name:mkc9c92cee1a5d5d37a42bba3c89d3e99be6c892 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:18.483381  547669 certs.go:382] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.crt.44cc0e5d -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.crt
	I1014 19:59:18.483464  547669 certs.go:386] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.key.44cc0e5d -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.key
	I1014 19:59:18.483519  547669 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/proxy-client.key
	I1014 19:59:18.483539  547669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/proxy-client.crt with IP's: []
	I1014 19:59:18.725124  547669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/proxy-client.crt ...
	I1014 19:59:18.725158  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/proxy-client.crt: {Name:mk3d6a277197517bf303b2714e4d53b14bbdac31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:18.725364  547669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/proxy-client.key ...
	I1014 19:59:18.725379  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/proxy-client.key: {Name:mkdadd69d020076dbd43dd35a7e051a8522b21d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:18.725566  547669 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 19:59:18.725608  547669 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 19:59:18.725632  547669 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 19:59:18.725660  547669 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 19:59:18.726322  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 19:59:18.745209  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 19:59:18.763610  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 19:59:18.781371  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 19:59:18.797881  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1014 19:59:18.815543  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 19:59:18.832269  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 19:59:18.850689  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 19:59:18.867223  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 19:59:18.884365  547669 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 19:59:18.896795  547669 ssh_runner.go:195] Run: openssl version
	I1014 19:59:18.902772  547669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 19:59:18.911286  547669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:59:18.914893  547669 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:59:18.914973  547669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:59:18.955409  547669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
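Aside (a sketch, not part of the captured output): the b5213941.0 symlink created above follows OpenSSL's certificate-directory lookup convention, where the link name is the CA's subject hash plus a .0 suffix. Against the same CA file, the hash can be reproduced with:

    # compute the subject hash OpenSSL uses for /etc/ssl/certs lookups (b5213941 for this CA)
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"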
	I1014 19:59:18.963541  547669 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 19:59:18.966911  547669 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 19:59:18.966958  547669 kubeadm.go:400] StartCluster: {Name:addons-864166 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-864166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:59:18.967046  547669 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 19:59:18.967113  547669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 19:59:18.993530  547669 cri.go:89] found id: ""
	I1014 19:59:18.993653  547669 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 19:59:19.002401  547669 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 19:59:19.010550  547669 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 19:59:19.010638  547669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 19:59:19.018345  547669 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 19:59:19.018364  547669 kubeadm.go:157] found existing configuration files:
	
	I1014 19:59:19.018438  547669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 19:59:19.026194  547669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 19:59:19.026296  547669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 19:59:19.033625  547669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 19:59:19.042128  547669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 19:59:19.042193  547669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 19:59:19.049355  547669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 19:59:19.056798  547669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 19:59:19.056879  547669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 19:59:19.064261  547669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 19:59:19.071893  547669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 19:59:19.071974  547669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 19:59:19.079243  547669 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 19:59:19.119286  547669 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 19:59:19.119352  547669 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 19:59:19.145863  547669 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 19:59:19.145945  547669 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1014 19:59:19.145987  547669 kubeadm.go:318] OS: Linux
	I1014 19:59:19.146040  547669 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 19:59:19.146095  547669 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1014 19:59:19.146149  547669 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 19:59:19.146203  547669 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 19:59:19.146258  547669 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 19:59:19.146346  547669 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 19:59:19.146399  547669 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 19:59:19.146455  547669 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 19:59:19.146508  547669 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1014 19:59:19.219512  547669 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 19:59:19.219632  547669 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 19:59:19.219731  547669 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 19:59:19.234905  547669 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 19:59:19.242087  547669 out.go:252]   - Generating certificates and keys ...
	I1014 19:59:19.242209  547669 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 19:59:19.242303  547669 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 19:59:19.474848  547669 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 19:59:20.769631  547669 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 19:59:21.726690  547669 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 19:59:22.117279  547669 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 19:59:23.468958  547669 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 19:59:23.469115  547669 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-864166 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 19:59:23.847938  547669 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 19:59:23.848166  547669 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-864166 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 19:59:25.035348  547669 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 19:59:25.493901  547669 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 19:59:25.967346  547669 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 19:59:25.967465  547669 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 19:59:26.925869  547669 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 19:59:27.413576  547669 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 19:59:27.931304  547669 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 19:59:29.315370  547669 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 19:59:29.503138  547669 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 19:59:29.503862  547669 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 19:59:29.507550  547669 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 19:59:29.511077  547669 out.go:252]   - Booting up control plane ...
	I1014 19:59:29.511204  547669 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 19:59:29.511295  547669 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 19:59:29.512185  547669 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 19:59:29.527162  547669 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 19:59:29.527280  547669 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 19:59:29.534864  547669 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 19:59:29.535190  547669 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 19:59:29.535459  547669 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 19:59:29.677045  547669 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 19:59:29.677165  547669 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 19:59:30.674518  547669 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002141436s
	I1014 19:59:30.678318  547669 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 19:59:30.678419  547669 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 19:59:30.678514  547669 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 19:59:30.678596  547669 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 19:59:34.557897  547669 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.878613614s
	I1014 19:59:35.497595  547669 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.819245323s
	I1014 19:59:37.180238  547669 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501848s
	I1014 19:59:37.204739  547669 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 19:59:37.218127  547669 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 19:59:37.233348  547669 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 19:59:37.233694  547669 kubeadm.go:318] [mark-control-plane] Marking the node addons-864166 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 19:59:37.246389  547669 kubeadm.go:318] [bootstrap-token] Using token: d5p9p5.n1skb9g2ibxk9cjj
	I1014 19:59:37.249498  547669 out.go:252]   - Configuring RBAC rules ...
	I1014 19:59:37.249647  547669 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 19:59:37.258531  547669 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 19:59:37.269553  547669 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 19:59:37.274012  547669 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 19:59:37.278409  547669 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 19:59:37.282792  547669 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 19:59:37.588009  547669 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 19:59:38.046415  547669 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1014 19:59:38.587689  547669 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1014 19:59:38.588852  547669 kubeadm.go:318] 
	I1014 19:59:38.588933  547669 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1014 19:59:38.588940  547669 kubeadm.go:318] 
	I1014 19:59:38.589016  547669 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1014 19:59:38.589021  547669 kubeadm.go:318] 
	I1014 19:59:38.589046  547669 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1014 19:59:38.589104  547669 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 19:59:38.589154  547669 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 19:59:38.589159  547669 kubeadm.go:318] 
	I1014 19:59:38.589212  547669 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1014 19:59:38.589217  547669 kubeadm.go:318] 
	I1014 19:59:38.589264  547669 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 19:59:38.589269  547669 kubeadm.go:318] 
	I1014 19:59:38.589321  547669 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1014 19:59:38.589396  547669 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 19:59:38.589463  547669 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 19:59:38.589468  547669 kubeadm.go:318] 
	I1014 19:59:38.589726  547669 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 19:59:38.589820  547669 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1014 19:59:38.589831  547669 kubeadm.go:318] 
	I1014 19:59:38.589920  547669 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token d5p9p5.n1skb9g2ibxk9cjj \
	I1014 19:59:38.590034  547669 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a \
	I1014 19:59:38.590061  547669 kubeadm.go:318] 	--control-plane 
	I1014 19:59:38.590070  547669 kubeadm.go:318] 
	I1014 19:59:38.590167  547669 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1014 19:59:38.590177  547669 kubeadm.go:318] 
	I1014 19:59:38.590269  547669 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token d5p9p5.n1skb9g2ibxk9cjj \
	I1014 19:59:38.590381  547669 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a 
	I1014 19:59:38.593580  547669 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1014 19:59:38.593891  547669 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1014 19:59:38.594027  547669 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 19:59:38.594045  547669 cni.go:84] Creating CNI manager for ""
	I1014 19:59:38.594054  547669 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:59:38.597271  547669 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1014 19:59:38.600267  547669 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 19:59:38.604296  547669 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1014 19:59:38.604339  547669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 19:59:38.616972  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
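Aside (a sketch, not part of the captured output): the apply above installs the kindnet CNI manifest recommended a few lines earlier. Assuming that manifest creates a DaemonSet named kindnet in kube-system (the name is not shown in this log), its rollout could be checked from the node with:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get daemonset kindnet -o wide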
	I1014 19:59:38.906705  547669 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 19:59:38.906844  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:38.906919  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-864166 minikube.k8s.io/updated_at=2025_10_14T19_59_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662 minikube.k8s.io/name=addons-864166 minikube.k8s.io/primary=true
	I1014 19:59:39.049350  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:39.049415  547669 ops.go:34] apiserver oom_adj: -16
	I1014 19:59:39.550017  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:40.050031  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:40.550325  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:41.049513  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:41.549510  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:42.049451  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:42.550080  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:43.050280  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:43.198826  547669 kubeadm.go:1113] duration metric: took 4.292022008s to wait for elevateKubeSystemPrivileges
	I1014 19:59:43.198852  547669 kubeadm.go:402] duration metric: took 24.23189744s to StartCluster
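Aside (a sketch, not part of the captured output): the repeated "kubectl get sa default" runs between 19:59:39 and 19:59:43 amount to a wait loop; minikube keeps polling until the default ServiceAccount exists before reporting elevateKubeSystemPrivileges as complete. A minimal shell equivalent, assuming the same binary and kubeconfig paths, would be:

    # poll for the default ServiceAccount, then stop (roughly what the log above records)
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
          get sa default >/dev/null 2>&1; do
      sleep 0.5
    done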
	I1014 19:59:43.198869  547669 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:43.198979  547669 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 19:59:43.199380  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:43.199599  547669 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 19:59:43.199773  547669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 19:59:43.200018  547669 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:59:43.200050  547669 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1014 19:59:43.200119  547669 addons.go:69] Setting yakd=true in profile "addons-864166"
	I1014 19:59:43.200136  547669 addons.go:238] Setting addon yakd=true in "addons-864166"
	I1014 19:59:43.200158  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.200667  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.200854  547669 addons.go:69] Setting inspektor-gadget=true in profile "addons-864166"
	I1014 19:59:43.200868  547669 addons.go:238] Setting addon inspektor-gadget=true in "addons-864166"
	I1014 19:59:43.200888  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.201273  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.201634  547669 addons.go:69] Setting metrics-server=true in profile "addons-864166"
	I1014 19:59:43.201663  547669 addons.go:238] Setting addon metrics-server=true in "addons-864166"
	I1014 19:59:43.201704  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.202112  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.202394  547669 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-864166"
	I1014 19:59:43.202410  547669 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-864166"
	I1014 19:59:43.202433  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.202834  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.208200  547669 addons.go:69] Setting cloud-spanner=true in profile "addons-864166"
	I1014 19:59:43.208234  547669 addons.go:238] Setting addon cloud-spanner=true in "addons-864166"
	I1014 19:59:43.208377  547669 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-864166"
	I1014 19:59:43.208399  547669 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-864166"
	I1014 19:59:43.208430  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.208521  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.208897  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.210849  547669 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-864166"
	I1014 19:59:43.210911  547669 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-864166"
	I1014 19:59:43.210937  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.211344  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.218540  547669 addons.go:69] Setting registry=true in profile "addons-864166"
	I1014 19:59:43.218577  547669 addons.go:238] Setting addon registry=true in "addons-864166"
	I1014 19:59:43.218624  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.219175  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.219536  547669 addons.go:69] Setting default-storageclass=true in profile "addons-864166"
	I1014 19:59:43.219556  547669 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-864166"
	I1014 19:59:43.219832  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.234893  547669 addons.go:69] Setting registry-creds=true in profile "addons-864166"
	I1014 19:59:43.234927  547669 addons.go:238] Setting addon registry-creds=true in "addons-864166"
	I1014 19:59:43.234961  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.235424  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.238040  547669 addons.go:69] Setting gcp-auth=true in profile "addons-864166"
	I1014 19:59:43.238078  547669 mustload.go:65] Loading cluster: addons-864166
	I1014 19:59:43.238301  547669 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:59:43.238581  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.254803  547669 addons.go:69] Setting ingress=true in profile "addons-864166"
	I1014 19:59:43.254835  547669 addons.go:238] Setting addon ingress=true in "addons-864166"
	I1014 19:59:43.254882  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.255338  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.255622  547669 addons.go:69] Setting storage-provisioner=true in profile "addons-864166"
	I1014 19:59:43.255639  547669 addons.go:238] Setting addon storage-provisioner=true in "addons-864166"
	I1014 19:59:43.255670  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.256071  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.269239  547669 addons.go:69] Setting ingress-dns=true in profile "addons-864166"
	I1014 19:59:43.269273  547669 addons.go:238] Setting addon ingress-dns=true in "addons-864166"
	I1014 19:59:43.269329  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.269793  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.272419  547669 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-864166"
	I1014 19:59:43.272452  547669 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-864166"
	I1014 19:59:43.272775  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.295736  547669 out.go:179] * Verifying Kubernetes components...
	I1014 19:59:43.298439  547669 addons.go:69] Setting volcano=true in profile "addons-864166"
	I1014 19:59:43.298472  547669 addons.go:238] Setting addon volcano=true in "addons-864166"
	I1014 19:59:43.298517  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.298992  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.299173  547669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:59:43.339011  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.345713  547669 addons.go:69] Setting volumesnapshots=true in profile "addons-864166"
	I1014 19:59:43.345743  547669 addons.go:238] Setting addon volumesnapshots=true in "addons-864166"
	I1014 19:59:43.345788  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.356470  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.469560  547669 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1014 19:59:43.502770  547669 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1014 19:59:43.507176  547669 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1014 19:59:43.507207  547669 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1014 19:59:43.507289  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.507461  547669 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1014 19:59:43.510389  547669 out.go:179]   - Using image docker.io/registry:3.0.0
	I1014 19:59:43.512739  547669 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 19:59:43.512800  547669 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 19:59:43.512918  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.540421  547669 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1014 19:59:43.540566  547669 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1014 19:59:43.540616  547669 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1014 19:59:43.543353  547669 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1014 19:59:43.543418  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1014 19:59:43.543502  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.545132  547669 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1014 19:59:43.545152  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1014 19:59:43.545221  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.555465  547669 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1014 19:59:43.555484  547669 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1014 19:59:43.555544  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.557518  547669 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-864166"
	I1014 19:59:43.557568  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.558026  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.583498  547669 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1014 19:59:43.588488  547669 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1014 19:59:43.588513  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1014 19:59:43.588580  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.609376  547669 addons.go:238] Setting addon default-storageclass=true in "addons-864166"
	I1014 19:59:43.609476  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.610125  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.610581  547669 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1014 19:59:43.610855  547669 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1014 19:59:43.610986  547669 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 19:59:43.611793  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.653661  547669 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1014 19:59:43.654689  547669 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	W1014 19:59:43.611976  547669 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1014 19:59:43.623340  547669 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1014 19:59:43.655931  547669 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1014 19:59:43.623414  547669 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1014 19:59:43.623536  547669 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:59:43.656731  547669 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1014 19:59:43.668381  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.670423  547669 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1014 19:59:43.673683  547669 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1014 19:59:43.681152  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1014 19:59:43.681422  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.681863  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1014 19:59:43.681958  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.689735  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.692460  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 19:59:43.700111  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.692470  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1014 19:59:43.701094  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.692771  547669 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1014 19:59:43.709785  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.716708  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.693670  547669 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1014 19:59:43.699693  547669 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1014 19:59:43.728780  547669 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1014 19:59:43.731677  547669 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1014 19:59:43.731697  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1014 19:59:43.731758  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.735975  547669 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1014 19:59:43.737914  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.742041  547669 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1014 19:59:43.745032  547669 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1014 19:59:43.747820  547669 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1014 19:59:43.756412  547669 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1014 19:59:43.759297  547669 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1014 19:59:43.759324  547669 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1014 19:59:43.759396  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.771006  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.779320  547669 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1014 19:59:43.786927  547669 out.go:179]   - Using image docker.io/busybox:stable
	I1014 19:59:43.795128  547669 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1014 19:59:43.795153  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1014 19:59:43.795218  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.845802  547669 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 19:59:43.845823  547669 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 19:59:43.845883  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.848641  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.854743  547669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 19:59:43.855497  547669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:59:43.899591  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.900122  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.902717  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.911427  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.922101  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.936788  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.942985  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.948536  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	W1014 19:59:43.949401  547669 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1014 19:59:43.949435  547669 retry.go:31] will retry after 334.62549ms: ssh: handshake failed: EOF
	I1014 19:59:43.978747  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	W1014 19:59:43.980658  547669 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1014 19:59:43.980680  547669 retry.go:31] will retry after 303.149886ms: ssh: handshake failed: EOF
	I1014 19:59:44.298367  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1014 19:59:44.359247  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1014 19:59:44.393841  547669 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 19:59:44.393906  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1014 19:59:44.449204  547669 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:59:44.449268  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1014 19:59:44.467288  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1014 19:59:44.480018  547669 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1014 19:59:44.480089  547669 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1014 19:59:44.520293  547669 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1014 19:59:44.520379  547669 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1014 19:59:44.524262  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1014 19:59:44.529522  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1014 19:59:44.551718  547669 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1014 19:59:44.551793  547669 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1014 19:59:44.589328  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1014 19:59:44.601717  547669 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 19:59:44.601779  547669 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 19:59:44.625299  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:59:44.630524  547669 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1014 19:59:44.630560  547669 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1014 19:59:44.637997  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1014 19:59:44.642869  547669 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1014 19:59:44.642890  547669 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1014 19:59:44.645171  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:59:44.686949  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:59:44.714979  547669 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 19:59:44.715045  547669 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 19:59:44.717575  547669 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1014 19:59:44.717633  547669 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1014 19:59:44.761335  547669 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1014 19:59:44.761405  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1014 19:59:44.770695  547669 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1014 19:59:44.770759  547669 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1014 19:59:44.795637  547669 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1014 19:59:44.795709  547669 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1014 19:59:44.896374  547669 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1014 19:59:44.896445  547669 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1014 19:59:44.930521  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1014 19:59:45.003450  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 19:59:45.042426  547669 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1014 19:59:45.042514  547669 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1014 19:59:45.211484  547669 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1014 19:59:45.211577  547669 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1014 19:59:45.236994  547669 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1014 19:59:45.237067  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1014 19:59:45.247082  547669 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 19:59:45.247169  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1014 19:59:45.367959  547669 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.513180877s)
	I1014 19:59:45.368131  547669 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1014 19:59:45.368054  547669 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.512536322s)
	I1014 19:59:45.368962  547669 node_ready.go:35] waiting up to 6m0s for node "addons-864166" to be "Ready" ...
	I1014 19:59:45.368092  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.069664762s)
	I1014 19:59:45.502955  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1014 19:59:45.545961  547669 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1014 19:59:45.546033  547669 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1014 19:59:45.549645  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 19:59:45.685871  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.326553645s)
	I1014 19:59:45.789942  547669 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1014 19:59:45.790009  547669 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1014 19:59:45.874855  547669 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-864166" context rescaled to 1 replicas
	I1014 19:59:46.158532  547669 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1014 19:59:46.158610  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1014 19:59:46.343239  547669 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1014 19:59:46.343312  547669 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1014 19:59:46.502294  547669 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1014 19:59:46.502367  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1014 19:59:46.822199  547669 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1014 19:59:46.822280  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1014 19:59:47.014149  547669 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1014 19:59:47.014226  547669 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1014 19:59:47.297628  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1014 19:59:47.394551  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 19:59:49.319285  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.851901904s)
	I1014 19:59:49.319371  547669 addons.go:479] Verifying addon ingress=true in "addons-864166"
	I1014 19:59:49.319605  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.795156553s)
	I1014 19:59:49.319667  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.790079723s)
	I1014 19:59:49.319810  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.69444502s)
	I1014 19:59:49.319859  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.681838155s)
	I1014 19:59:49.319921  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.674722756s)
	W1014 19:59:49.319937  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:49.319950  547669 retry.go:31] will retry after 224.113929ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
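The failure above is mechanical rather than cluster-related: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest it was handed does not begin with the two fields every Kubernetes object must declare. A minimal sketch of the two usual ways out, assuming the same kubeconfig and kubectl binary seen in this log (the `head` inspection and the `--validate=false` variant are illustrative, not commands the test actually ran):

    # 1) Check that the manifest opens with the required header, e.g.
    #      apiVersion: apiextensions.k8s.io/v1
    #      kind: CustomResourceDefinition
    head -n 5 /etc/kubernetes/addons/ig-crd.yaml

    # 2) Or, as the kubectl error itself suggests, skip client-side validation:
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply --validate=false \
      -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml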
	I1014 19:59:49.319977  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.6329595s)
	I1014 19:59:49.320177  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.389583464s)
	I1014 19:59:49.320190  547669 addons.go:479] Verifying addon registry=true in "addons-864166"
	I1014 19:59:49.320296  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.730352152s)
	I1014 19:59:49.320675  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.317134377s)
	I1014 19:59:49.320698  547669 addons.go:479] Verifying addon metrics-server=true in "addons-864166"
	I1014 19:59:49.320735  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.817704949s)
	I1014 19:59:49.322849  547669 out.go:179] * Verifying ingress addon...
	I1014 19:59:49.324849  547669 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-864166 service yakd-dashboard -n yakd-dashboard
	
	I1014 19:59:49.324916  547669 out.go:179] * Verifying registry addon...
	I1014 19:59:49.327522  547669 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1014 19:59:49.330619  547669 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1014 19:59:49.335682  547669 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1014 19:59:49.335709  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:49.337788  547669 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1014 19:59:49.337813  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 19:59:49.342388  547669 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
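The storage-provisioner-rancher warning above is a standard optimistic-concurrency conflict: between reading the local-path StorageClass and writing it back with the default-class annotation, something else updated the object, so the write was rejected with "the object has been modified". Such conflicts are transient; a sketch of re-attempting the same marking by hand, assuming the kubeconfig and kubectl binary from this log (illustrative only, not part of the test run):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'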
	I1014 19:59:49.359924  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.81019441s)
	W1014 19:59:49.359964  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1014 19:59:49.359983  547669 retry.go:31] will retry after 137.53807ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
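This second failure is an ordering problem rather than a bad manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same pass as the CRDs that define its kind, before the API server has registered them, hence "ensure CRDs are installed first". The forced re-apply that follows appears to clear it because the CRDs created in the first pass are available by then. Outside of a retry loop, the same ordering can be made explicit, for example (an illustrative sketch using the kubeconfig, kubectl binary, and CRD name from this log):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
      apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
      wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
      apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml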
	I1014 19:59:49.498355  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 19:59:49.544348  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:59:49.840285  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:49.843042  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 19:59:49.876775  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 19:59:49.905458  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.607741773s)
	I1014 19:59:49.905502  547669 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-864166"
	I1014 19:59:49.910588  547669 out.go:179] * Verifying csi-hostpath-driver addon...
	I1014 19:59:49.913777  547669 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1014 19:59:49.923092  547669 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1014 19:59:49.923114  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:50.330781  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:50.333012  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:50.416841  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:50.794612  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.250219191s)
	W1014 19:59:50.794646  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:50.794666  547669 retry.go:31] will retry after 558.61315ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:50.831129  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:50.833114  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:50.931921  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:51.262156  547669 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1014 19:59:51.262238  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:51.278982  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:51.331984  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:51.334364  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:51.354121  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:59:51.396515  547669 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1014 19:59:51.418286  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:51.421175  547669 addons.go:238] Setting addon gcp-auth=true in "addons-864166"
	I1014 19:59:51.421263  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:51.421729  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:51.446239  547669 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1014 19:59:51.446305  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:51.469365  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:51.832601  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:51.833655  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:51.917400  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1014 19:59:52.188260  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:52.188294  547669 retry.go:31] will retry after 315.178256ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:52.191448  547669 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1014 19:59:52.194227  547669 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1014 19:59:52.197011  547669 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1014 19:59:52.197032  547669 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1014 19:59:52.210964  547669 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1014 19:59:52.210985  547669 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1014 19:59:52.225039  547669 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1014 19:59:52.225061  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1014 19:59:52.239242  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1014 19:59:52.331673  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:52.333531  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 19:59:52.372410  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 19:59:52.417116  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:52.504241  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:59:52.779941  547669 addons.go:479] Verifying addon gcp-auth=true in "addons-864166"
	I1014 19:59:52.782963  547669 out.go:179] * Verifying gcp-auth addon...
	I1014 19:59:52.787341  547669 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1014 19:59:52.796877  547669 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1014 19:59:52.796899  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:52.897484  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:52.897702  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:52.917329  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:53.290809  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:53.331645  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:53.334292  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 19:59:53.402964  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:53.402996  547669 retry.go:31] will retry after 458.066972ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:53.417154  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:53.795812  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:53.830832  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:53.833333  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:53.861620  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:59:53.918119  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:54.291198  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:54.331728  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:54.333881  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 19:59:54.373593  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 19:59:54.417512  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1014 19:59:54.678096  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:54.678171  547669 retry.go:31] will retry after 1.852442892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:54.791016  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:54.831433  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:54.834058  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:54.916803  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:55.290719  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:55.330801  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:55.333098  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:55.416537  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:55.790807  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:55.830573  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:55.834002  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:55.917307  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:56.290657  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:56.331380  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:56.333230  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:56.417338  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:56.531506  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:59:56.791246  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:56.831822  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:56.834451  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 19:59:56.872944  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 19:59:56.917560  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:57.291840  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1014 19:59:57.328945  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:57.329018  547669 retry.go:31] will retry after 2.493328732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:57.331094  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:57.333532  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:57.417297  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:57.790808  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:57.830536  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:57.833813  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:57.918330  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:58.290609  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:58.330391  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:58.333824  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:58.416839  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:58.791358  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:58.831261  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:58.834925  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:58.916868  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:59.290737  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:59.331107  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:59.333067  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 19:59:59.371770  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 19:59:59.417984  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:59.791112  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:59.823570  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:59:59.835325  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:59.836122  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:59.917735  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:00.295035  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:00.352113  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:00.352280  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:00.425460  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:00.815502  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:00.835063  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:00.838501  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:00.918544  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:01.295281  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:01.345674  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:01.346249  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.522598993s)
	W1014 20:00:01.346285  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:01.346320  547669 retry.go:31] will retry after 2.907557369s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:01.359434  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:01.402302  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:01.422200  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:01.836521  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:01.862388  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:01.862573  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:01.936994  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:02.293610  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:02.333011  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:02.344870  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:02.417357  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:02.792094  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:02.831154  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:02.834074  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:02.916972  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:03.290561  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:03.332189  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:03.334557  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:03.417954  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:03.790411  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:03.831032  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:03.833491  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:03.872295  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:03.917338  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:04.254659  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 20:00:04.291386  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:04.331947  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:04.334876  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:04.417095  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:04.791450  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:04.830717  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:04.833168  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:04.917801  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1014 20:00:05.089871  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:05.089919  547669 retry.go:31] will retry after 4.213729924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:05.290726  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:05.330665  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:05.333101  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:05.417745  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:05.791792  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:05.831866  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:05.834086  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:05.916846  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:06.291368  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:06.331723  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:06.333566  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:06.372463  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:06.417199  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:06.791008  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:06.831430  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:06.833670  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:06.916926  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:07.290520  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:07.331143  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:07.333384  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:07.423721  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:07.791023  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:07.831193  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:07.834126  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:07.916979  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:08.301396  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:08.331507  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:08.333449  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:08.417560  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:08.791214  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:08.831543  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:08.833971  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:08.871868  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:08.917886  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:09.291150  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:09.304476  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 20:00:09.333022  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:09.336174  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:09.417629  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:09.791224  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:09.831723  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:09.834116  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:09.918736  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1014 20:00:10.173609  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:10.173644  547669 retry.go:31] will retry after 3.694244117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:10.290651  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:10.330861  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:10.333444  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:10.417395  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:10.790697  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:10.833105  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:10.834374  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:10.872915  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:10.916738  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:11.291290  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:11.331559  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:11.334031  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:11.417793  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:11.790136  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:11.831461  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:11.833318  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:11.917149  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:12.290432  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:12.338837  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:12.339199  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:12.416794  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:12.790990  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:12.831288  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:12.833411  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:12.917348  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:13.290536  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:13.331566  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:13.333966  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:13.371704  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:13.417661  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:13.791210  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:13.832356  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:13.833458  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:13.868601  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 20:00:13.917434  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:14.291703  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:14.331920  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:14.334649  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:14.417514  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1014 20:00:14.698106  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:14.698139  547669 retry.go:31] will retry after 5.682476792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:14.791441  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:14.831604  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:14.834089  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:14.916849  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:15.290927  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:15.330727  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:15.333277  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:15.372279  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:15.417432  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:15.790331  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:15.831265  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:15.833495  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:15.917282  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:16.290544  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:16.330570  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:16.333876  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:16.417797  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:16.791498  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:16.830476  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:16.834068  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:16.916855  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:17.291561  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:17.331880  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:17.334625  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:17.372401  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:17.417167  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:17.790478  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:17.832011  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:17.834025  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:17.917776  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:18.290865  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:18.331366  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:18.333219  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:18.417055  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:18.791075  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:18.832075  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:18.833286  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:18.917392  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:19.290622  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:19.330611  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:19.334015  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:19.372541  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:19.417706  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:19.791117  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:19.831481  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:19.833477  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:19.917253  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:20.290260  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:20.331466  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:20.333744  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:20.381745  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 20:00:20.417365  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:20.790568  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:20.831055  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:20.833698  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:20.917837  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1014 20:00:21.220446  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:21.220536  547669 retry.go:31] will retry after 12.384930157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:21.290123  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:21.331470  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:21.333584  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:21.418634  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:21.790969  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:21.831235  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:21.833396  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:21.871992  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:21.916851  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:22.291180  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:22.332139  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:22.333322  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:22.416913  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:22.791149  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:22.831032  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:22.833165  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:22.916759  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:23.291140  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:23.331288  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:23.333185  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:23.417224  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:23.790778  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:23.830861  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:23.833014  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:23.916673  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:24.290781  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:24.330945  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:24.333301  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:24.372066  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:24.417940  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:24.790906  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:24.830707  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:24.832985  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:24.917729  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:25.291188  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:25.331305  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:25.333299  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:25.416978  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:25.794600  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:25.832812  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:25.886296  547669 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1014 20:00:25.886321  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:25.898953  547669 node_ready.go:49] node "addons-864166" is "Ready"
	I1014 20:00:25.898986  547669 node_ready.go:38] duration metric: took 40.530003122s for node "addons-864166" to be "Ready" ...
	I1014 20:00:25.899000  547669 api_server.go:52] waiting for apiserver process to appear ...
	I1014 20:00:25.899075  547669 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:00:25.929830  547669 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1014 20:00:25.929857  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:25.931030  547669 api_server.go:72] duration metric: took 42.731405309s to wait for apiserver process to appear ...
	I1014 20:00:25.931053  547669 api_server.go:88] waiting for apiserver healthz status ...
	I1014 20:00:25.931071  547669 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1014 20:00:25.945495  547669 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1014 20:00:25.949036  547669 api_server.go:141] control plane version: v1.34.1
	I1014 20:00:25.949069  547669 api_server.go:131] duration metric: took 18.007827ms to wait for apiserver health ...
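The healthz probe logged above can be reproduced by hand; a minimal sketch, assuming the endpoint and port from the log (https://192.168.49.2:8443) are still reachable, and using -k because the apiserver serves a cluster-internal certificate.

	# Manual equivalent of the apiserver health check in the log.
	curl -sk https://192.168.49.2:8443/healthz
	# A healthy apiserver answers with the literal body: ok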
	I1014 20:00:25.949079  547669 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 20:00:25.981649  547669 system_pods.go:59] 19 kube-system pods found
	I1014 20:00:25.981685  547669 system_pods.go:61] "coredns-66bc5c9577-9rwrs" [c47964fa-47a1-424c-a7eb-5b512551c245] Pending
	I1014 20:00:25.981691  547669 system_pods.go:61] "csi-hostpath-attacher-0" [45ea89e5-0c4a-4a0e-bb7c-5cb62f77f0dd] Pending
	I1014 20:00:25.981701  547669 system_pods.go:61] "csi-hostpath-resizer-0" [6f065c64-49b4-4b19-8928-ef3f62e74cf8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1014 20:00:25.981706  547669 system_pods.go:61] "csi-hostpathplugin-svw4p" [f1b998c8-61ab-40b8-b449-1081efd7beff] Pending
	I1014 20:00:25.981712  547669 system_pods.go:61] "etcd-addons-864166" [8d1396ae-3e63-45fe-9ee8-94a86740b503] Running
	I1014 20:00:25.981717  547669 system_pods.go:61] "kindnet-68wfd" [5641ead2-4190-4e48-b41f-07309a894764] Running
	I1014 20:00:25.981752  547669 system_pods.go:61] "kube-apiserver-addons-864166" [ba844bf8-6968-48c5-bc90-642f6e308be5] Running
	I1014 20:00:25.981763  547669 system_pods.go:61] "kube-controller-manager-addons-864166" [f2f2b122-e41d-411b-9f5f-0934a2d2bcc6] Running
	I1014 20:00:25.981768  547669 system_pods.go:61] "kube-ingress-dns-minikube" [4c972b0f-59b8-4f1b-aff0-86477fdb31a8] Pending
	I1014 20:00:25.981772  547669 system_pods.go:61] "kube-proxy-qkhq5" [63c22dc1-5b7b-43b2-b879-fba9222761a7] Running
	I1014 20:00:25.981777  547669 system_pods.go:61] "kube-scheduler-addons-864166" [1e259b7d-9efa-46e4-a5ed-023f7e6ebfdf] Running
	I1014 20:00:25.981788  547669 system_pods.go:61] "metrics-server-85b7d694d7-q2bhj" [bf121c3b-58d6-4f5d-9ec7-3f6356b23132] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 20:00:25.981793  547669 system_pods.go:61] "nvidia-device-plugin-daemonset-cgtft" [174ba8d1-a219-4e80-9816-0f60ee316d62] Pending
	I1014 20:00:25.981818  547669 system_pods.go:61] "registry-6b586f9694-ph85v" [5bda4348-2158-45b7-81f9-366fd18b681a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 20:00:25.981832  547669 system_pods.go:61] "registry-creds-764b6fb674-6rp57" [16928613-b3d8-4dca-854f-3b08843dd017] Pending
	I1014 20:00:25.981840  547669 system_pods.go:61] "registry-proxy-ww88v" [b91a51f7-e2da-4901-8d80-579ad73448e2] Pending
	I1014 20:00:25.981850  547669 system_pods.go:61] "snapshot-controller-7d9fbc56b8-c992l" [13663a37-601e-474f-a20d-f0a7c1597a30] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:25.981857  547669 system_pods.go:61] "snapshot-controller-7d9fbc56b8-k524x" [d44ca9f8-5376-4e01-b3a9-2050733f2b88] Pending
	I1014 20:00:25.981867  547669 system_pods.go:61] "storage-provisioner" [ba401d7b-7ea2-478c-876b-27114aea306d] Pending
	I1014 20:00:25.981873  547669 system_pods.go:74] duration metric: took 32.788409ms to wait for pod list to return data ...
	I1014 20:00:25.981908  547669 default_sa.go:34] waiting for default service account to be created ...
	I1014 20:00:26.065678  547669 default_sa.go:45] found service account: "default"
	I1014 20:00:26.065710  547669 default_sa.go:55] duration metric: took 83.789152ms for default service account to be created ...
	I1014 20:00:26.065721  547669 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 20:00:26.143814  547669 system_pods.go:86] 19 kube-system pods found
	I1014 20:00:26.143852  547669 system_pods.go:89] "coredns-66bc5c9577-9rwrs" [c47964fa-47a1-424c-a7eb-5b512551c245] Pending
	I1014 20:00:26.143859  547669 system_pods.go:89] "csi-hostpath-attacher-0" [45ea89e5-0c4a-4a0e-bb7c-5cb62f77f0dd] Pending
	I1014 20:00:26.143867  547669 system_pods.go:89] "csi-hostpath-resizer-0" [6f065c64-49b4-4b19-8928-ef3f62e74cf8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1014 20:00:26.143872  547669 system_pods.go:89] "csi-hostpathplugin-svw4p" [f1b998c8-61ab-40b8-b449-1081efd7beff] Pending
	I1014 20:00:26.143917  547669 system_pods.go:89] "etcd-addons-864166" [8d1396ae-3e63-45fe-9ee8-94a86740b503] Running
	I1014 20:00:26.143923  547669 system_pods.go:89] "kindnet-68wfd" [5641ead2-4190-4e48-b41f-07309a894764] Running
	I1014 20:00:26.143928  547669 system_pods.go:89] "kube-apiserver-addons-864166" [ba844bf8-6968-48c5-bc90-642f6e308be5] Running
	I1014 20:00:26.143932  547669 system_pods.go:89] "kube-controller-manager-addons-864166" [f2f2b122-e41d-411b-9f5f-0934a2d2bcc6] Running
	I1014 20:00:26.143944  547669 system_pods.go:89] "kube-ingress-dns-minikube" [4c972b0f-59b8-4f1b-aff0-86477fdb31a8] Pending
	I1014 20:00:26.143948  547669 system_pods.go:89] "kube-proxy-qkhq5" [63c22dc1-5b7b-43b2-b879-fba9222761a7] Running
	I1014 20:00:26.143953  547669 system_pods.go:89] "kube-scheduler-addons-864166" [1e259b7d-9efa-46e4-a5ed-023f7e6ebfdf] Running
	I1014 20:00:26.143989  547669 system_pods.go:89] "metrics-server-85b7d694d7-q2bhj" [bf121c3b-58d6-4f5d-9ec7-3f6356b23132] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 20:00:26.144001  547669 system_pods.go:89] "nvidia-device-plugin-daemonset-cgtft" [174ba8d1-a219-4e80-9816-0f60ee316d62] Pending
	I1014 20:00:26.144010  547669 system_pods.go:89] "registry-6b586f9694-ph85v" [5bda4348-2158-45b7-81f9-366fd18b681a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 20:00:26.144019  547669 system_pods.go:89] "registry-creds-764b6fb674-6rp57" [16928613-b3d8-4dca-854f-3b08843dd017] Pending
	I1014 20:00:26.144025  547669 system_pods.go:89] "registry-proxy-ww88v" [b91a51f7-e2da-4901-8d80-579ad73448e2] Pending
	I1014 20:00:26.144032  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c992l" [13663a37-601e-474f-a20d-f0a7c1597a30] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:26.144040  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k524x" [d44ca9f8-5376-4e01-b3a9-2050733f2b88] Pending
	I1014 20:00:26.144044  547669 system_pods.go:89] "storage-provisioner" [ba401d7b-7ea2-478c-876b-27114aea306d] Pending
	I1014 20:00:26.144070  547669 retry.go:31] will retry after 294.445937ms: missing components: kube-dns
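The retry loop above is blocked on kube-dns, i.e. the coredns pods that minikube treats as the kube-dns component. A hypothetical manual check of the same condition, assuming kubectl is pointed at this cluster and relying on the standard k8s-app=kube-dns label that coredns pods carry in kubeadm-style clusters:

	# Show the pods the "missing components: kube-dns" wait is polling for.
	kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide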
	I1014 20:00:26.295868  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:26.336592  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:26.340662  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:26.422178  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:26.451634  547669 system_pods.go:86] 19 kube-system pods found
	I1014 20:00:26.451724  547669 system_pods.go:89] "coredns-66bc5c9577-9rwrs" [c47964fa-47a1-424c-a7eb-5b512551c245] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:00:26.451749  547669 system_pods.go:89] "csi-hostpath-attacher-0" [45ea89e5-0c4a-4a0e-bb7c-5cb62f77f0dd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1014 20:00:26.451787  547669 system_pods.go:89] "csi-hostpath-resizer-0" [6f065c64-49b4-4b19-8928-ef3f62e74cf8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1014 20:00:26.451813  547669 system_pods.go:89] "csi-hostpathplugin-svw4p" [f1b998c8-61ab-40b8-b449-1081efd7beff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1014 20:00:26.451835  547669 system_pods.go:89] "etcd-addons-864166" [8d1396ae-3e63-45fe-9ee8-94a86740b503] Running
	I1014 20:00:26.451859  547669 system_pods.go:89] "kindnet-68wfd" [5641ead2-4190-4e48-b41f-07309a894764] Running
	I1014 20:00:26.451890  547669 system_pods.go:89] "kube-apiserver-addons-864166" [ba844bf8-6968-48c5-bc90-642f6e308be5] Running
	I1014 20:00:26.451915  547669 system_pods.go:89] "kube-controller-manager-addons-864166" [f2f2b122-e41d-411b-9f5f-0934a2d2bcc6] Running
	I1014 20:00:26.451941  547669 system_pods.go:89] "kube-ingress-dns-minikube" [4c972b0f-59b8-4f1b-aff0-86477fdb31a8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1014 20:00:26.451963  547669 system_pods.go:89] "kube-proxy-qkhq5" [63c22dc1-5b7b-43b2-b879-fba9222761a7] Running
	I1014 20:00:26.451994  547669 system_pods.go:89] "kube-scheduler-addons-864166" [1e259b7d-9efa-46e4-a5ed-023f7e6ebfdf] Running
	I1014 20:00:26.452020  547669 system_pods.go:89] "metrics-server-85b7d694d7-q2bhj" [bf121c3b-58d6-4f5d-9ec7-3f6356b23132] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 20:00:26.452044  547669 system_pods.go:89] "nvidia-device-plugin-daemonset-cgtft" [174ba8d1-a219-4e80-9816-0f60ee316d62] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1014 20:00:26.452067  547669 system_pods.go:89] "registry-6b586f9694-ph85v" [5bda4348-2158-45b7-81f9-366fd18b681a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 20:00:26.452100  547669 system_pods.go:89] "registry-creds-764b6fb674-6rp57" [16928613-b3d8-4dca-854f-3b08843dd017] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1014 20:00:26.452130  547669 system_pods.go:89] "registry-proxy-ww88v" [b91a51f7-e2da-4901-8d80-579ad73448e2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1014 20:00:26.452156  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c992l" [13663a37-601e-474f-a20d-f0a7c1597a30] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:26.452183  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k524x" [d44ca9f8-5376-4e01-b3a9-2050733f2b88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:26.452215  547669 system_pods.go:89] "storage-provisioner" [ba401d7b-7ea2-478c-876b-27114aea306d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:00:26.452251  547669 retry.go:31] will retry after 265.969577ms: missing components: kube-dns
	I1014 20:00:26.723698  547669 system_pods.go:86] 19 kube-system pods found
	I1014 20:00:26.723789  547669 system_pods.go:89] "coredns-66bc5c9577-9rwrs" [c47964fa-47a1-424c-a7eb-5b512551c245] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:00:26.723816  547669 system_pods.go:89] "csi-hostpath-attacher-0" [45ea89e5-0c4a-4a0e-bb7c-5cb62f77f0dd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1014 20:00:26.723860  547669 system_pods.go:89] "csi-hostpath-resizer-0" [6f065c64-49b4-4b19-8928-ef3f62e74cf8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1014 20:00:26.723891  547669 system_pods.go:89] "csi-hostpathplugin-svw4p" [f1b998c8-61ab-40b8-b449-1081efd7beff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1014 20:00:26.723914  547669 system_pods.go:89] "etcd-addons-864166" [8d1396ae-3e63-45fe-9ee8-94a86740b503] Running
	I1014 20:00:26.723938  547669 system_pods.go:89] "kindnet-68wfd" [5641ead2-4190-4e48-b41f-07309a894764] Running
	I1014 20:00:26.723970  547669 system_pods.go:89] "kube-apiserver-addons-864166" [ba844bf8-6968-48c5-bc90-642f6e308be5] Running
	I1014 20:00:26.723998  547669 system_pods.go:89] "kube-controller-manager-addons-864166" [f2f2b122-e41d-411b-9f5f-0934a2d2bcc6] Running
	I1014 20:00:26.724028  547669 system_pods.go:89] "kube-ingress-dns-minikube" [4c972b0f-59b8-4f1b-aff0-86477fdb31a8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1014 20:00:26.724051  547669 system_pods.go:89] "kube-proxy-qkhq5" [63c22dc1-5b7b-43b2-b879-fba9222761a7] Running
	I1014 20:00:26.724094  547669 system_pods.go:89] "kube-scheduler-addons-864166" [1e259b7d-9efa-46e4-a5ed-023f7e6ebfdf] Running
	I1014 20:00:26.724123  547669 system_pods.go:89] "metrics-server-85b7d694d7-q2bhj" [bf121c3b-58d6-4f5d-9ec7-3f6356b23132] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 20:00:26.724149  547669 system_pods.go:89] "nvidia-device-plugin-daemonset-cgtft" [174ba8d1-a219-4e80-9816-0f60ee316d62] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1014 20:00:26.724174  547669 system_pods.go:89] "registry-6b586f9694-ph85v" [5bda4348-2158-45b7-81f9-366fd18b681a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 20:00:26.724206  547669 system_pods.go:89] "registry-creds-764b6fb674-6rp57" [16928613-b3d8-4dca-854f-3b08843dd017] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1014 20:00:26.724233  547669 system_pods.go:89] "registry-proxy-ww88v" [b91a51f7-e2da-4901-8d80-579ad73448e2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1014 20:00:26.724257  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c992l" [13663a37-601e-474f-a20d-f0a7c1597a30] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:26.724284  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k524x" [d44ca9f8-5376-4e01-b3a9-2050733f2b88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:26.724392  547669 system_pods.go:89] "storage-provisioner" [ba401d7b-7ea2-478c-876b-27114aea306d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:00:26.724431  547669 retry.go:31] will retry after 463.425484ms: missing components: kube-dns
	I1014 20:00:26.790895  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:26.831159  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:26.833395  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:26.918012  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:27.193260  547669 system_pods.go:86] 19 kube-system pods found
	I1014 20:00:27.193305  547669 system_pods.go:89] "coredns-66bc5c9577-9rwrs" [c47964fa-47a1-424c-a7eb-5b512551c245] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:00:27.193315  547669 system_pods.go:89] "csi-hostpath-attacher-0" [45ea89e5-0c4a-4a0e-bb7c-5cb62f77f0dd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1014 20:00:27.193324  547669 system_pods.go:89] "csi-hostpath-resizer-0" [6f065c64-49b4-4b19-8928-ef3f62e74cf8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1014 20:00:27.193330  547669 system_pods.go:89] "csi-hostpathplugin-svw4p" [f1b998c8-61ab-40b8-b449-1081efd7beff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1014 20:00:27.193336  547669 system_pods.go:89] "etcd-addons-864166" [8d1396ae-3e63-45fe-9ee8-94a86740b503] Running
	I1014 20:00:27.193342  547669 system_pods.go:89] "kindnet-68wfd" [5641ead2-4190-4e48-b41f-07309a894764] Running
	I1014 20:00:27.193351  547669 system_pods.go:89] "kube-apiserver-addons-864166" [ba844bf8-6968-48c5-bc90-642f6e308be5] Running
	I1014 20:00:27.193356  547669 system_pods.go:89] "kube-controller-manager-addons-864166" [f2f2b122-e41d-411b-9f5f-0934a2d2bcc6] Running
	I1014 20:00:27.193369  547669 system_pods.go:89] "kube-ingress-dns-minikube" [4c972b0f-59b8-4f1b-aff0-86477fdb31a8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1014 20:00:27.193375  547669 system_pods.go:89] "kube-proxy-qkhq5" [63c22dc1-5b7b-43b2-b879-fba9222761a7] Running
	I1014 20:00:27.193380  547669 system_pods.go:89] "kube-scheduler-addons-864166" [1e259b7d-9efa-46e4-a5ed-023f7e6ebfdf] Running
	I1014 20:00:27.193392  547669 system_pods.go:89] "metrics-server-85b7d694d7-q2bhj" [bf121c3b-58d6-4f5d-9ec7-3f6356b23132] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 20:00:27.193398  547669 system_pods.go:89] "nvidia-device-plugin-daemonset-cgtft" [174ba8d1-a219-4e80-9816-0f60ee316d62] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1014 20:00:27.193412  547669 system_pods.go:89] "registry-6b586f9694-ph85v" [5bda4348-2158-45b7-81f9-366fd18b681a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 20:00:27.193418  547669 system_pods.go:89] "registry-creds-764b6fb674-6rp57" [16928613-b3d8-4dca-854f-3b08843dd017] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1014 20:00:27.193425  547669 system_pods.go:89] "registry-proxy-ww88v" [b91a51f7-e2da-4901-8d80-579ad73448e2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1014 20:00:27.193431  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c992l" [13663a37-601e-474f-a20d-f0a7c1597a30] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:27.193440  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k524x" [d44ca9f8-5376-4e01-b3a9-2050733f2b88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:27.193446  547669 system_pods.go:89] "storage-provisioner" [ba401d7b-7ea2-478c-876b-27114aea306d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:00:27.193467  547669 retry.go:31] will retry after 572.588227ms: missing components: kube-dns
	I1014 20:00:27.290859  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:27.333900  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:27.345018  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:27.420045  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:27.771640  547669 system_pods.go:86] 19 kube-system pods found
	I1014 20:00:27.771725  547669 system_pods.go:89] "coredns-66bc5c9577-9rwrs" [c47964fa-47a1-424c-a7eb-5b512551c245] Running
	I1014 20:00:27.771752  547669 system_pods.go:89] "csi-hostpath-attacher-0" [45ea89e5-0c4a-4a0e-bb7c-5cb62f77f0dd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1014 20:00:27.771797  547669 system_pods.go:89] "csi-hostpath-resizer-0" [6f065c64-49b4-4b19-8928-ef3f62e74cf8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1014 20:00:27.771825  547669 system_pods.go:89] "csi-hostpathplugin-svw4p" [f1b998c8-61ab-40b8-b449-1081efd7beff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1014 20:00:27.771845  547669 system_pods.go:89] "etcd-addons-864166" [8d1396ae-3e63-45fe-9ee8-94a86740b503] Running
	I1014 20:00:27.771869  547669 system_pods.go:89] "kindnet-68wfd" [5641ead2-4190-4e48-b41f-07309a894764] Running
	I1014 20:00:27.771900  547669 system_pods.go:89] "kube-apiserver-addons-864166" [ba844bf8-6968-48c5-bc90-642f6e308be5] Running
	I1014 20:00:27.771920  547669 system_pods.go:89] "kube-controller-manager-addons-864166" [f2f2b122-e41d-411b-9f5f-0934a2d2bcc6] Running
	I1014 20:00:27.771941  547669 system_pods.go:89] "kube-ingress-dns-minikube" [4c972b0f-59b8-4f1b-aff0-86477fdb31a8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1014 20:00:27.771962  547669 system_pods.go:89] "kube-proxy-qkhq5" [63c22dc1-5b7b-43b2-b879-fba9222761a7] Running
	I1014 20:00:27.771984  547669 system_pods.go:89] "kube-scheduler-addons-864166" [1e259b7d-9efa-46e4-a5ed-023f7e6ebfdf] Running
	I1014 20:00:27.772019  547669 system_pods.go:89] "metrics-server-85b7d694d7-q2bhj" [bf121c3b-58d6-4f5d-9ec7-3f6356b23132] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 20:00:27.772041  547669 system_pods.go:89] "nvidia-device-plugin-daemonset-cgtft" [174ba8d1-a219-4e80-9816-0f60ee316d62] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1014 20:00:27.772066  547669 system_pods.go:89] "registry-6b586f9694-ph85v" [5bda4348-2158-45b7-81f9-366fd18b681a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 20:00:27.772098  547669 system_pods.go:89] "registry-creds-764b6fb674-6rp57" [16928613-b3d8-4dca-854f-3b08843dd017] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1014 20:00:27.772125  547669 system_pods.go:89] "registry-proxy-ww88v" [b91a51f7-e2da-4901-8d80-579ad73448e2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1014 20:00:27.772150  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c992l" [13663a37-601e-474f-a20d-f0a7c1597a30] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:27.772177  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k524x" [d44ca9f8-5376-4e01-b3a9-2050733f2b88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:27.772209  547669 system_pods.go:89] "storage-provisioner" [ba401d7b-7ea2-478c-876b-27114aea306d] Running
	I1014 20:00:27.772243  547669 system_pods.go:126] duration metric: took 1.706513602s to wait for k8s-apps to be running ...
	I1014 20:00:27.772267  547669 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 20:00:27.772384  547669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:00:27.786271  547669 system_svc.go:56] duration metric: took 13.996661ms WaitForService to wait for kubelet
	I1014 20:00:27.786300  547669 kubeadm.go:586] duration metric: took 44.586678275s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:00:27.786320  547669 node_conditions.go:102] verifying NodePressure condition ...
	I1014 20:00:27.790619  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:27.791439  547669 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:00:27.791470  547669 node_conditions.go:123] node cpu capacity is 2
	I1014 20:00:27.791483  547669 node_conditions.go:105] duration metric: took 5.157497ms to run NodePressure ...
	I1014 20:00:27.791495  547669 start.go:241] waiting for startup goroutines ...
	I1014 20:00:27.831710  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:27.833646  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:27.918551  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:28.291228  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:28.392121  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:28.392390  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:28.418945  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:28.790640  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:28.831142  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:28.833505  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:28.919189  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:29.290777  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:29.331250  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:29.333242  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:29.420815  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:29.791354  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:29.833265  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:29.835464  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:29.918500  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:30.293998  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:30.333254  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:30.335890  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:30.417452  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:30.792922  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:30.832639  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:30.835590  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:30.918301  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:31.290618  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:31.332496  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:31.335753  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:31.420582  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:31.790969  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:31.834488  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:31.835097  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:31.927282  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:32.291410  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:32.331337  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:32.333627  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:32.417956  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:32.791451  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:32.832609  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:32.834449  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:32.918119  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:33.290666  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:33.331061  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:33.333761  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:33.417702  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:33.606205  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 20:00:33.791419  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:33.832888  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:33.834672  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:33.917185  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:34.290974  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:34.331242  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:34.333930  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:34.417429  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:34.665080  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.058825491s)
	W1014 20:00:34.665157  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:34.665193  547669 retry.go:31] will retry after 24.50091787s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
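The validation error above indicates kubectl could not find top-level apiVersion and kind fields in /etc/kubernetes/addons/ig-crd.yaml, which usually means the manifest (or one of its YAML documents) was written empty or truncated rather than merely misconfigured. A minimal sketch of how this could be checked by hand on the node, reusing the kubectl binary and paths from the log (the --dry-run=client form only evaluates the file client-side and does not change cluster state; exact validation output may vary by kubectl version):

	# Inspect the first lines of the manifest that failed validation
	sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml
	# Re-run the same apply as a client-side dry run to surface the
	# "apiVersion not set, kind not set" message without touching the cluster
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client \
	  -f /etc/kubernetes/addons/ig-crd.yaml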
	I1014 20:00:34.792200  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:34.832239  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:34.835057  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:34.918545  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:35.291130  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:35.331926  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:35.334951  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:35.418578  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:35.791059  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:35.831985  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:35.835060  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:35.918091  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:36.291713  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:36.331310  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:36.333880  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:36.417882  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:36.790892  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:36.832664  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:36.834567  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:36.918483  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:37.291050  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:37.392795  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:37.392838  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:37.416965  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:37.791087  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:37.833588  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:37.835812  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:37.917680  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:38.291133  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:38.333167  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:38.335059  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:38.418216  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:38.791902  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:38.831363  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:38.834163  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:38.918000  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:39.291175  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:39.331320  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:39.333793  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:39.420029  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:39.791661  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:39.835377  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:39.836373  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:39.919027  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:40.291397  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:40.332706  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:40.334448  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:40.418416  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:40.792458  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:40.834137  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:40.835772  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:40.917588  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:41.297213  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:41.333108  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:41.335241  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:41.418255  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:41.791514  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:41.830718  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:41.833272  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:41.917598  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:42.291224  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:42.332980  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:42.335509  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:42.417583  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:42.791799  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:42.831236  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:42.833840  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:42.917114  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:43.290618  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:43.330645  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:43.333241  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:43.417626  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:43.791778  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:43.831348  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:43.833850  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:43.917290  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:44.294631  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:44.391155  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:44.391362  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:44.418427  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:44.790847  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:44.831486  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:44.833661  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:44.917913  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:45.291588  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:45.332586  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:45.337335  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:45.417900  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:45.790822  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:45.831056  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:45.833545  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:45.917776  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:46.290728  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:46.331536  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:46.334450  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:46.418742  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:46.792055  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:46.832766  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:46.835166  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:46.917855  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:47.292899  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:47.331199  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:47.333697  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:47.418311  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:47.791185  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:47.831396  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:47.834442  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:47.917705  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:48.290088  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:48.331142  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:48.333550  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:48.417850  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:48.791533  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:48.830605  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:48.834383  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:48.917648  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:49.291533  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:49.330599  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:49.340502  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:49.418391  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:49.791464  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:49.832891  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:49.835350  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:49.918624  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:50.290877  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:50.331590  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:50.334740  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:50.418617  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:50.791077  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:50.831164  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:50.832940  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:50.917200  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:51.290573  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:51.330549  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:51.334160  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:51.417761  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:51.791438  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:51.831562  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:51.834285  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:51.917564  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:52.291315  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:52.333885  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:52.336203  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:52.418421  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:52.792245  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:52.832506  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:52.833766  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:52.917003  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:53.291169  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:53.331207  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:53.333469  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:53.417910  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:53.790991  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:53.831026  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:53.833648  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:53.917921  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:54.291003  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:54.332037  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:54.334396  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:54.417680  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:54.791287  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:54.832081  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:54.836112  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:54.918296  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:55.290896  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:55.332696  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:55.334500  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:55.418126  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:55.790452  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:55.831743  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:55.834677  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:55.918168  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:56.291242  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:56.331664  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:56.334593  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:56.417663  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:56.791148  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:56.832084  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:56.833672  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:56.918054  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:57.291681  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:57.330916  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:57.333728  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:57.416968  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:57.791439  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:57.831727  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:57.833754  547669 kapi.go:107] duration metric: took 1m8.503132272s to wait for kubernetes.io/minikube-addons=registry ...
	I1014 20:00:57.917282  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:58.290781  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:58.331430  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:58.418408  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:58.792649  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:58.831126  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:58.917803  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:59.166923  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 20:00:59.302851  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:59.334900  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:59.420168  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:59.790898  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:59.833062  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:59.918818  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:00.330902  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:00.355811  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:00.419987  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:00.791878  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:00.831206  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:00.893760  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.726801858s)
	W1014 20:01:00.893802  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:01:00.893821  547669 retry.go:31] will retry after 20.531033112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
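The stderr text itself suggests --validate=false as a way to skip this check; a hedged sketch of that form of the same apply command is below. This is a manual workaround, not what minikube's addon applier actually retries with, and it may still fail at decode time if a document truly has no kind set at all:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml \
	  -f /etc/kubernetes/addons/ig-deployment.yaml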
	I1014 20:01:00.917284  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:01.291917  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:01.332675  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:01.418948  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:01.791007  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:01.831462  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:01.917814  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:02.290964  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:02.331358  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:02.417647  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:02.794657  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:02.832175  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:02.945009  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:03.290531  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:03.330931  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:03.417564  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:03.791438  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:03.831949  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:03.917732  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:04.290234  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:04.331251  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:04.418233  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:04.790672  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:04.833142  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:04.917709  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:05.291263  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:05.331508  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:05.417834  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:05.791185  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:05.831699  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:05.917206  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:06.290426  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:06.330533  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:06.417738  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:06.790868  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:06.832127  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:06.922509  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:07.291297  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:07.331445  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:07.418129  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:07.790564  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:07.831423  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:07.918208  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:08.290275  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:08.331278  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:08.417366  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:08.790662  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:08.832084  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:08.917986  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:09.291509  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:09.331684  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:09.417208  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:09.791395  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:09.831858  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:09.917729  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:10.291757  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:10.332810  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:10.417396  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:10.791270  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:10.831490  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:10.919822  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:11.290886  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:11.331361  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:11.418994  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:11.790630  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:11.832008  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:11.933789  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:12.291617  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:12.331129  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:12.417903  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:12.791490  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:12.831119  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:12.919640  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:13.290545  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:13.331683  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:13.419264  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:13.793810  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:13.831141  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:13.917781  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:14.290722  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:14.331186  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:14.420492  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:14.791167  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:14.841216  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:14.918400  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:15.290842  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:15.331455  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:15.418122  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:15.790605  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:15.830816  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:15.917821  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:16.291185  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:16.331915  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:16.417698  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:16.791532  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:16.830743  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:16.918385  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:17.290944  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:17.331211  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:17.416930  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:17.792188  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:17.831864  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:17.917628  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:18.291148  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:18.331766  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:18.417436  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:18.790735  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:18.831083  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:18.917696  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:19.291142  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:19.331079  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:19.417591  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:19.790410  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:19.831257  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:19.917241  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:20.291403  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:20.331663  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:20.417800  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:20.790825  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:20.830983  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:20.918280  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:21.290598  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:21.330720  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:21.417475  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:21.425541  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 20:01:21.791384  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:21.831836  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:21.918604  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1014 20:01:22.261734  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:01:22.261858  547669 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1014 20:01:22.291523  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:22.331366  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:22.417286  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:22.790491  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:22.830576  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:22.918048  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:23.291046  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:23.331143  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:23.417792  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:23.790865  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:23.831187  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:23.917228  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:24.290668  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:24.330947  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:24.418455  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:24.791525  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:24.830876  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:24.918633  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:25.291466  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:25.332017  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:25.417538  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:25.791137  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:25.831301  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:25.919602  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:26.291717  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:26.331591  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:26.420746  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:26.793299  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:26.831896  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:26.921264  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:27.290578  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:27.330903  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:27.417101  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:27.813806  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:27.900010  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:27.917064  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:28.290117  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:28.331134  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:28.417815  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:28.803074  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:28.843102  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:28.917797  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:29.290918  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:29.331664  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:29.418646  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:29.791550  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:29.831393  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:29.918152  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:30.292033  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:30.332749  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:30.419556  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:30.791019  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:30.831571  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:30.918501  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:31.309415  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:31.332151  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:31.417625  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:31.791333  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:31.832437  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:31.918037  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:32.291033  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:32.392347  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:32.417550  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:32.790831  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:32.830923  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:32.917695  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:33.291310  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:33.331330  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:33.417590  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:33.791100  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:33.863646  547669 kapi.go:107] duration metric: took 1m44.536119588s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1014 20:01:33.917714  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:34.291050  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:34.418669  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:34.790913  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:34.917694  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:35.295990  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:35.417961  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:35.790402  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:35.918157  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:36.291268  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:36.417804  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:36.789945  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:36.917548  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:37.291424  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:37.418329  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:37.790425  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:37.917660  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:38.292560  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:38.418012  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:38.791711  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:38.918873  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:39.291408  547669 kapi.go:107] duration metric: took 1m46.504065867s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1014 20:01:39.294647  547669 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-864166 cluster.
	I1014 20:01:39.297612  547669 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1014 20:01:39.300552  547669 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1014 20:01:39.418434  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:39.916744  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:40.418106  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:40.918137  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:41.417563  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:41.924049  547669 kapi.go:107] duration metric: took 1m52.010270923s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1014 20:01:41.927242  547669 out.go:179] * Enabled addons: nvidia-device-plugin, registry-creds, ingress-dns, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1014 20:01:41.930048  547669 addons.go:514] duration metric: took 1m58.729976641s for enable addons: enabled=[nvidia-device-plugin registry-creds ingress-dns amd-gpu-device-plugin storage-provisioner cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1014 20:01:41.930107  547669 start.go:246] waiting for cluster config update ...
	I1014 20:01:41.930127  547669 start.go:255] writing updated cluster config ...
	I1014 20:01:41.930438  547669 ssh_runner.go:195] Run: rm -f paused
	I1014 20:01:41.934536  547669 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:01:41.938622  547669 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9rwrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:41.945124  547669 pod_ready.go:94] pod "coredns-66bc5c9577-9rwrs" is "Ready"
	I1014 20:01:41.945153  547669 pod_ready.go:86] duration metric: took 6.502213ms for pod "coredns-66bc5c9577-9rwrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:41.947337  547669 pod_ready.go:83] waiting for pod "etcd-addons-864166" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:41.951452  547669 pod_ready.go:94] pod "etcd-addons-864166" is "Ready"
	I1014 20:01:41.951475  547669 pod_ready.go:86] duration metric: took 4.112058ms for pod "etcd-addons-864166" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:41.953399  547669 pod_ready.go:83] waiting for pod "kube-apiserver-addons-864166" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:41.957365  547669 pod_ready.go:94] pod "kube-apiserver-addons-864166" is "Ready"
	I1014 20:01:41.957391  547669 pod_ready.go:86] duration metric: took 3.966004ms for pod "kube-apiserver-addons-864166" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:41.959441  547669 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-864166" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:42.338939  547669 pod_ready.go:94] pod "kube-controller-manager-addons-864166" is "Ready"
	I1014 20:01:42.338980  547669 pod_ready.go:86] duration metric: took 379.51527ms for pod "kube-controller-manager-addons-864166" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:42.538690  547669 pod_ready.go:83] waiting for pod "kube-proxy-qkhq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:42.938156  547669 pod_ready.go:94] pod "kube-proxy-qkhq5" is "Ready"
	I1014 20:01:42.938196  547669 pod_ready.go:86] duration metric: took 399.478422ms for pod "kube-proxy-qkhq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:43.138831  547669 pod_ready.go:83] waiting for pod "kube-scheduler-addons-864166" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:43.538508  547669 pod_ready.go:94] pod "kube-scheduler-addons-864166" is "Ready"
	I1014 20:01:43.538542  547669 pod_ready.go:86] duration metric: took 399.680812ms for pod "kube-scheduler-addons-864166" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:43.538555  547669 pod_ready.go:40] duration metric: took 1.603985321s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:01:43.592686  547669 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1014 20:01:43.597806  547669 out.go:179] * Done! kubectl is now configured to use "addons-864166" cluster and "default" namespace by default
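	The gcp-auth lines above describe an opt-out label for credential mounting; the manifest below is a minimal sketch of that opt-out, assuming the webhook skips injection whenever the gcp-auth-skip-secret label is present (the pod name and image are placeholders, not taken from this run):

	  # illustrative pod that opts out of gcp-auth credential mounting
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds-example        # placeholder name
	    labels:
	      gcp-auth-skip-secret: "true"    # label key the addon output above refers to
	  spec:
	    containers:
	    - name: app
	      image: busybox                  # placeholder image
	      command: ["sleep", "3600"]

	Applied with kubectl apply -f against the addons-864166 cluster, such a pod would, under that assumption, come up without the injected credential secret, while other pods would still receive it.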
	
	
	==> CRI-O <==
	Oct 14 20:04:38 addons-864166 crio[831]: time="2025-10-14T20:04:38.18792682Z" level=info msg="Removed container b9e8b36047ce6e28b05f9a5df5ef4e636357ca4981f5dbdf042359c394bd6cb8: kube-system/registry-creds-764b6fb674-6rp57/registry-creds" id=d2a2474d-de67-4e78-b598-f0014378bf00 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 20:04:42 addons-864166 crio[831]: time="2025-10-14T20:04:42.731669925Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-6kd6b/POD" id=2c285937-5afc-498c-8c55-2966ceaa67ee name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 20:04:42 addons-864166 crio[831]: time="2025-10-14T20:04:42.732191081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:04:42 addons-864166 crio[831]: time="2025-10-14T20:04:42.748608173Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-6kd6b Namespace:default ID:c59e285651bd5a1693682fbc4c2c0553d606ca3ff8c2b3cb8d1f3eeffd4c9488 UID:f80234d7-c06f-44c0-93f8-d4f33fb03b76 NetNS:/var/run/netns/b99ff9e0-44fd-413f-9db0-e478da820bb1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001c3cc10}] Aliases:map[]}"
	Oct 14 20:04:42 addons-864166 crio[831]: time="2025-10-14T20:04:42.748663624Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-6kd6b to CNI network \"kindnet\" (type=ptp)"
	Oct 14 20:04:42 addons-864166 crio[831]: time="2025-10-14T20:04:42.789390192Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-6kd6b Namespace:default ID:c59e285651bd5a1693682fbc4c2c0553d606ca3ff8c2b3cb8d1f3eeffd4c9488 UID:f80234d7-c06f-44c0-93f8-d4f33fb03b76 NetNS:/var/run/netns/b99ff9e0-44fd-413f-9db0-e478da820bb1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001c3cc10}] Aliases:map[]}"
	Oct 14 20:04:42 addons-864166 crio[831]: time="2025-10-14T20:04:42.789548217Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-6kd6b for CNI network kindnet (type=ptp)"
	Oct 14 20:04:42 addons-864166 crio[831]: time="2025-10-14T20:04:42.803775534Z" level=info msg="Ran pod sandbox c59e285651bd5a1693682fbc4c2c0553d606ca3ff8c2b3cb8d1f3eeffd4c9488 with infra container: default/hello-world-app-5d498dc89-6kd6b/POD" id=2c285937-5afc-498c-8c55-2966ceaa67ee name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 20:04:42 addons-864166 crio[831]: time="2025-10-14T20:04:42.805313968Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9d9bfa4e-bf25-4252-bede-704386b8e9a9 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:04:42 addons-864166 crio[831]: time="2025-10-14T20:04:42.805453572Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=9d9bfa4e-bf25-4252-bede-704386b8e9a9 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:04:42 addons-864166 crio[831]: time="2025-10-14T20:04:42.805643154Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=9d9bfa4e-bf25-4252-bede-704386b8e9a9 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:04:42 addons-864166 crio[831]: time="2025-10-14T20:04:42.809894956Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=5343fb25-334a-4d5f-b564-a6a67f0068e3 name=/runtime.v1.ImageService/PullImage
	Oct 14 20:04:42 addons-864166 crio[831]: time="2025-10-14T20:04:42.814474302Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 14 20:04:43 addons-864166 crio[831]: time="2025-10-14T20:04:43.400827568Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=5343fb25-334a-4d5f-b564-a6a67f0068e3 name=/runtime.v1.ImageService/PullImage
	Oct 14 20:04:43 addons-864166 crio[831]: time="2025-10-14T20:04:43.401619284Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=32a07fbe-4c2c-41e1-a736-3fe00a5326ca name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:04:43 addons-864166 crio[831]: time="2025-10-14T20:04:43.413321873Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=690a9f2f-5a2b-45df-8de5-e71ed82e32cd name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:04:43 addons-864166 crio[831]: time="2025-10-14T20:04:43.420761107Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-6kd6b/hello-world-app" id=5ed67384-3ef8-457d-add4-5f3102740064 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:04:43 addons-864166 crio[831]: time="2025-10-14T20:04:43.422022729Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:04:43 addons-864166 crio[831]: time="2025-10-14T20:04:43.449882377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:04:43 addons-864166 crio[831]: time="2025-10-14T20:04:43.450248765Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a216cfba8d9ec33bab4eab1d5b2c29a99f2c2c2cdef33154f35d14e6f54905e7/merged/etc/passwd: no such file or directory"
	Oct 14 20:04:43 addons-864166 crio[831]: time="2025-10-14T20:04:43.450281471Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a216cfba8d9ec33bab4eab1d5b2c29a99f2c2c2cdef33154f35d14e6f54905e7/merged/etc/group: no such file or directory"
	Oct 14 20:04:43 addons-864166 crio[831]: time="2025-10-14T20:04:43.451062282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:04:43 addons-864166 crio[831]: time="2025-10-14T20:04:43.479003884Z" level=info msg="Created container fab0ef9bac395d1b6e43843a966820e40e7e77d55e065176be9b03b0dbd30097: default/hello-world-app-5d498dc89-6kd6b/hello-world-app" id=5ed67384-3ef8-457d-add4-5f3102740064 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:04:43 addons-864166 crio[831]: time="2025-10-14T20:04:43.482056481Z" level=info msg="Starting container: fab0ef9bac395d1b6e43843a966820e40e7e77d55e065176be9b03b0dbd30097" id=49c4478f-8ec3-4393-a0dd-309046dbfc13 name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 20:04:43 addons-864166 crio[831]: time="2025-10-14T20:04:43.487552845Z" level=info msg="Started container" PID=7492 containerID=fab0ef9bac395d1b6e43843a966820e40e7e77d55e065176be9b03b0dbd30097 description=default/hello-world-app-5d498dc89-6kd6b/hello-world-app id=49c4478f-8ec3-4393-a0dd-309046dbfc13 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c59e285651bd5a1693682fbc4c2c0553d606ca3ff8c2b3cb8d1f3eeffd4c9488
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	fab0ef9bac395       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   c59e285651bd5       hello-world-app-5d498dc89-6kd6b             default
	f0b89e652073d       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             6 seconds ago            Exited              registry-creds                           1                   01cde6344d3e7       registry-creds-764b6fb674-6rp57             kube-system
	dc10023efdeba       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago            Running             nginx                                    0                   6c86a7b16d0fa       nginx                                       default
	e386ad6e060c0       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   97de4626f0493       busybox                                     default
	78ef6dffdb260       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   80793f4f6f28f       csi-hostpathplugin-svw4p                    kube-system
	3ed114fa3e5da       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   80793f4f6f28f       csi-hostpathplugin-svw4p                    kube-system
	fc727586dde63       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   80793f4f6f28f       csi-hostpathplugin-svw4p                    kube-system
	92f9c104d49fb       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   bea82525dfd78       gcp-auth-78565c9fb4-gxbj2                   gcp-auth
	7ef76141e9242       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   80793f4f6f28f       csi-hostpathplugin-svw4p                    kube-system
	e0e7e8627c210       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   204117cfa07fd       ingress-nginx-controller-675c5ddd98-gph67   ingress-nginx
	6ef3702df3f19       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   186b7daebd493       gadget-jpdp2                                gadget
	c5c1390f4e419       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   80793f4f6f28f       csi-hostpathplugin-svw4p                    kube-system
	08cb4838bbaf5       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   f55b6593b7e29       yakd-dashboard-5ff678cb9-rfjnt              yakd-dashboard
	67f5b6eccde75       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              patch                                    0                   eab771aef57f4       ingress-nginx-admission-patch-hmv8p         ingress-nginx
	f351a573847cd       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   11cbf0d919237       nvidia-device-plugin-daemonset-cgtft        kube-system
	6b79820d7c3cc       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   5ee9ff1e32616       snapshot-controller-7d9fbc56b8-k524x        kube-system
	5c84e205968bf       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   a38edbbc35c59       registry-proxy-ww88v                        kube-system
	bb4fccdbcf0f5       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   71392ba204600       csi-hostpath-attacher-0                     kube-system
	1313169fc23bd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              create                                   0                   6d95eeeebb5fb       ingress-nginx-admission-create-fzdhr        ingress-nginx
	3ca6b7f2cc2b0       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   dc7e7d460397d       snapshot-controller-7d9fbc56b8-c992l        kube-system
	6e40d4e7135a0       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   b054b9bb65849       csi-hostpath-resizer-0                      kube-system
	51b5289a11b57       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago            Running             cloud-spanner-emulator                   0                   57c59cc0f565c       cloud-spanner-emulator-86bd5cbb97-gz4h4     default
	3e1c1eae40cfd       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   80793f4f6f28f       csi-hostpathplugin-svw4p                    kube-system
	8b466c8617b98       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   835d1e5a47c51       registry-6b586f9694-ph85v                   kube-system
	576ef271047e0       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   0060b955dbf11       local-path-provisioner-648f6765c9-g88bj     local-path-storage
	b80cf57eb9e2b       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   bbc57ed32489e       kube-ingress-dns-minikube                   kube-system
	a327e10ba7623       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   1063f690cee69       metrics-server-85b7d694d7-q2bhj             kube-system
	7d43ec8e4714e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   8386ddb409201       coredns-66bc5c9577-9rwrs                    kube-system
	a97b20dece5ce       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   46c0f84b66bc0       storage-provisioner                         kube-system
	42ea884de055b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   39cbfde4117ab       kube-proxy-qkhq5                            kube-system
	85281702b4961       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   de5337d58aac0       kindnet-68wfd                               kube-system
	b10c05c5f1576       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   5bcedea5e98b2       kube-apiserver-addons-864166                kube-system
	9a2e8bf9aef9f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   bb2583320595d       kube-scheduler-addons-864166                kube-system
	e4d0d5d58460a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   4b14e8eefb0b9       kube-controller-manager-addons-864166       kube-system
	2db14186188f0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   98bf5cc734261       etcd-addons-864166                          kube-system
	
	
	==> coredns [7d43ec8e4714ec16f6fd9c499d2855be2b62082b41c665e68fd2ff0150fb37f0] <==
	[INFO] 10.244.0.9:34505 - 34696 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002366129s
	[INFO] 10.244.0.9:34505 - 44228 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000118s
	[INFO] 10.244.0.9:34505 - 21731 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000194062s
	[INFO] 10.244.0.9:38546 - 29470 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000202809s
	[INFO] 10.244.0.9:38546 - 29248 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00026827s
	[INFO] 10.244.0.9:38341 - 20159 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000113233s
	[INFO] 10.244.0.9:38341 - 19954 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073937s
	[INFO] 10.244.0.9:58158 - 9788 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087197s
	[INFO] 10.244.0.9:58158 - 9591 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078385s
	[INFO] 10.244.0.9:33675 - 21311 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001235808s
	[INFO] 10.244.0.9:33675 - 21097 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001177928s
	[INFO] 10.244.0.9:51865 - 41272 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000113946s
	[INFO] 10.244.0.9:51865 - 41100 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000158632s
	[INFO] 10.244.0.21:59072 - 4005 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000212581s
	[INFO] 10.244.0.21:41608 - 55500 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000212672s
	[INFO] 10.244.0.21:33590 - 62115 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000201873s
	[INFO] 10.244.0.21:49706 - 15678 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000067496s
	[INFO] 10.244.0.21:55963 - 61985 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000176413s
	[INFO] 10.244.0.21:44971 - 59639 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000066224s
	[INFO] 10.244.0.21:59888 - 958 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002467431s
	[INFO] 10.244.0.21:40874 - 31423 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002297493s
	[INFO] 10.244.0.21:44953 - 17366 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003086813s
	[INFO] 10.244.0.21:33450 - 9935 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.003868468s
	[INFO] 10.244.0.23:59619 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000177061s
	[INFO] 10.244.0.23:45935 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000163334s
	
	
	==> describe nodes <==
	Name:               addons-864166
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-864166
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=addons-864166
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T19_59_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-864166
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-864166"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 19:59:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-864166
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 20:04:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 20:04:25 +0000   Tue, 14 Oct 2025 19:59:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 20:04:25 +0000   Tue, 14 Oct 2025 19:59:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 20:04:25 +0000   Tue, 14 Oct 2025 19:59:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 20:04:25 +0000   Tue, 14 Oct 2025 20:00:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-864166
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 39850f21655f4698832fc12574d92819
	  System UUID:                379c39e7-3898-4b4a-b13c-d3b0f6833857
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  default                     cloud-spanner-emulator-86bd5cbb97-gz4h4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  default                     hello-world-app-5d498dc89-6kd6b              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-jpdp2                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  gcp-auth                    gcp-auth-78565c9fb4-gxbj2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-gph67    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m55s
	  kube-system                 coredns-66bc5c9577-9rwrs                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m1s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 csi-hostpathplugin-svw4p                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 etcd-addons-864166                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m6s
	  kube-system                 kindnet-68wfd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m1s
	  kube-system                 kube-apiserver-addons-864166                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-controller-manager-addons-864166        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-proxy-qkhq5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-addons-864166                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 metrics-server-85b7d694d7-q2bhj              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m56s
	  kube-system                 nvidia-device-plugin-daemonset-cgtft         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 registry-6b586f9694-ph85v                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 registry-creds-764b6fb674-6rp57              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 registry-proxy-ww88v                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 snapshot-controller-7d9fbc56b8-c992l         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 snapshot-controller-7d9fbc56b8-k524x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  local-path-storage          local-path-provisioner-648f6765c9-g88bj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-rfjnt               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m58s  kube-proxy       
	  Normal   Starting                 5m7s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m7s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m6s   kubelet          Node addons-864166 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m6s   kubelet          Node addons-864166 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m6s   kubelet          Node addons-864166 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m2s   node-controller  Node addons-864166 event: Registered Node addons-864166 in Controller
	  Normal   NodeReady                4m19s  kubelet          Node addons-864166 status is now: NodeReady
	
	
	==> dmesg <==
	[ +31.622660] overlayfs: idmapped layers are currently not supported
	[Oct14 19:29] overlayfs: idmapped layers are currently not supported
	[Oct14 19:30] overlayfs: idmapped layers are currently not supported
	[Oct14 19:32] overlayfs: idmapped layers are currently not supported
	[Oct14 19:38] overlayfs: idmapped layers are currently not supported
	[Oct14 19:41] overlayfs: idmapped layers are currently not supported
	[ +36.095229] overlayfs: idmapped layers are currently not supported
	[Oct14 19:42] overlayfs: idmapped layers are currently not supported
	[ +18.587469] overlayfs: idmapped layers are currently not supported
	[Oct14 19:44] overlayfs: idmapped layers are currently not supported
	[ +28.158583] overlayfs: idmapped layers are currently not supported
	[Oct14 19:45] overlayfs: idmapped layers are currently not supported
	[Oct14 19:46] overlayfs: idmapped layers are currently not supported
	[Oct14 19:47] overlayfs: idmapped layers are currently not supported
	[ +31.505487] overlayfs: idmapped layers are currently not supported
	[Oct14 19:48] overlayfs: idmapped layers are currently not supported
	[ +35.381153] overlayfs: idmapped layers are currently not supported
	[ +14.062769] overlayfs: idmapped layers are currently not supported
	[Oct14 19:50] overlayfs: idmapped layers are currently not supported
	[Oct14 19:51] overlayfs: idmapped layers are currently not supported
	[Oct14 19:52] overlayfs: idmapped layers are currently not supported
	[  +0.143459] overlayfs: idmapped layers are currently not supported
	[Oct14 19:54] overlayfs: idmapped layers are currently not supported
	[Oct14 19:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct14 19:59] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2db14186188f011755f8c1390f57aedc39076f38c3d4a6a2c599a1788e75fa6b] <==
	{"level":"warn","ts":"2025-10-14T19:59:34.084424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.103204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.134340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.170342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.195092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.226880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.245899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.323660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.352682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.404122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.437954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.439904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.462870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.488994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.534111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.561828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.568167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.591659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.684408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:50.008387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:50.029070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:00:12.316968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:00:12.336533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:00:12.486929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:00:12.501956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58858","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [92f9c104d49fbb70f558252b04eb0b8b220a5e848af1bfa501480caaa1efebd9] <==
	2025/10/14 20:01:38 GCP Auth Webhook started!
	2025/10/14 20:01:44 http: TLS handshake error from 10.244.0.1:17854: remote error: tls: bad certificate
	2025/10/14 20:01:44 http: TLS handshake error from 10.244.0.1:25338: remote error: tls: bad certificate
	2025/10/14 20:01:44 http: TLS handshake error from 10.244.0.1:23767: remote error: tls: bad certificate
	2025/10/14 20:02:05 Ready to marshal response ...
	2025/10/14 20:02:05 Ready to write response ...
	2025/10/14 20:02:11 Ready to marshal response ...
	2025/10/14 20:02:11 Ready to write response ...
	2025/10/14 20:02:21 Ready to marshal response ...
	2025/10/14 20:02:21 Ready to write response ...
	2025/10/14 20:02:42 Ready to marshal response ...
	2025/10/14 20:02:42 Ready to write response ...
	2025/10/14 20:02:52 Ready to marshal response ...
	2025/10/14 20:02:52 Ready to write response ...
	2025/10/14 20:02:52 Ready to marshal response ...
	2025/10/14 20:02:52 Ready to write response ...
	2025/10/14 20:03:00 Ready to marshal response ...
	2025/10/14 20:03:00 Ready to write response ...
	2025/10/14 20:04:42 Ready to marshal response ...
	2025/10/14 20:04:42 Ready to write response ...
	
	
	==> kernel <==
	 20:04:44 up  2:47,  0 user,  load average: 1.50, 2.14, 2.43
	Linux addons-864166 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [85281702b49614b94db01ce11d8ea99610d57489ec6345063e8f199a3d311e3c] <==
	I1014 20:02:35.294046       1 main.go:301] handling current node
	I1014 20:02:45.292152       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:02:45.292284       1 main.go:301] handling current node
	I1014 20:02:55.292928       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:02:55.292961       1 main.go:301] handling current node
	I1014 20:03:05.292005       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:03:05.292038       1 main.go:301] handling current node
	I1014 20:03:15.291972       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:03:15.292007       1 main.go:301] handling current node
	I1014 20:03:25.296410       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:03:25.296446       1 main.go:301] handling current node
	I1014 20:03:35.292392       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:03:35.292434       1 main.go:301] handling current node
	I1014 20:03:45.300497       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:03:45.300632       1 main.go:301] handling current node
	I1014 20:03:55.298208       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:03:55.298242       1 main.go:301] handling current node
	I1014 20:04:05.300410       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:04:05.300524       1 main.go:301] handling current node
	I1014 20:04:15.292188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:04:15.292225       1 main.go:301] handling current node
	I1014 20:04:25.296001       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:04:25.296036       1 main.go:301] handling current node
	I1014 20:04:35.301070       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:04:35.301105       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b10c05c5f1576c84d6e7be3e027ef408b730a5606775cc99dc050a8b46cf99fe] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 20:00:32.937895       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1014 20:00:32.937916       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1014 20:00:32.939024       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 20:00:36.945920       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 20:00:36.945984       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1014 20:00:36.947869       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.216.123:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.216.123:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1014 20:00:37.000064       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1014 20:00:37.024929       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	W1014 20:01:44.037483       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate-sa.k8s.io: failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": tls: failed to verify certificate: x509: certificate signed by unknown authority
	E1014 20:01:44.037531       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate-sa.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s\": tls: failed to verify certificate: x509: certificate signed by unknown authority" logger="UnhandledError"
	W1014 20:01:44.277193       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": tls: failed to verify certificate: x509: certificate signed by unknown authority
	E1014 20:01:44.277287       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": tls: failed to verify certificate: x509: certificate signed by unknown authority" logger="UnhandledError"
	W1014 20:01:44.374485       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate-sa.k8s.io: failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": tls: failed to verify certificate: x509: certificate signed by unknown authority
	E1014 20:01:44.374588       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate-sa.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s\": tls: failed to verify certificate: x509: certificate signed by unknown authority" logger="UnhandledError"
	I1014 20:02:21.776033       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1014 20:02:22.074232       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.13.98"}
	I1014 20:02:22.273215       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1014 20:02:24.185732       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1014 20:04:42.601513       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.146.45"}
	
	
	==> kube-controller-manager [e4d0d5d58460a7cf4d55d124ee5894e17b05fee1741e46856247cdb064c0b542] <==
	I1014 19:59:42.351988       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1014 19:59:42.351999       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1014 19:59:42.352585       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1014 19:59:42.352602       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1014 19:59:42.352611       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1014 19:59:42.355866       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1014 19:59:42.356068       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1014 19:59:42.356133       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 19:59:42.356165       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1014 19:59:42.356193       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1014 19:59:42.366635       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1014 19:59:42.372398       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-864166" podCIDRs=["10.244.0.0/24"]
	I1014 19:59:42.400989       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 19:59:42.401013       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1014 19:59:42.401020       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 19:59:42.467284       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1014 19:59:48.036978       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1014 20:00:12.309501       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:00:12.309668       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1014 20:00:12.309718       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1014 20:00:12.410449       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 20:00:12.475209       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1014 20:00:12.479685       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1014 20:00:12.580841       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 20:00:27.360085       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [42ea884de055b59b2ffed5e7b9d28454868620756a73d3280a1aa24fb121dc9d] <==
	I1014 19:59:45.211201       1 server_linux.go:53] "Using iptables proxy"
	I1014 19:59:45.381935       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 19:59:45.501874       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 19:59:45.501916       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1014 19:59:45.502007       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 19:59:45.543195       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 19:59:45.543254       1 server_linux.go:132] "Using iptables Proxier"
	I1014 19:59:45.555999       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 19:59:45.556330       1 server.go:527] "Version info" version="v1.34.1"
	I1014 19:59:45.556352       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 19:59:45.557721       1 config.go:200] "Starting service config controller"
	I1014 19:59:45.557732       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 19:59:45.557756       1 config.go:106] "Starting endpoint slice config controller"
	I1014 19:59:45.557761       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 19:59:45.557773       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 19:59:45.557777       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 19:59:45.558431       1 config.go:309] "Starting node config controller"
	I1014 19:59:45.558438       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 19:59:45.558444       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 19:59:45.658890       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 19:59:45.658928       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 19:59:45.658972       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9a2e8bf9aef9f2ba07816aca3818d22c434fd19e119b2383aa10a93919d3ece5] <==
	E1014 19:59:35.495581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1014 19:59:35.495706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1014 19:59:35.495858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1014 19:59:35.495964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1014 19:59:35.496060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1014 19:59:35.496145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1014 19:59:35.496269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1014 19:59:35.496368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1014 19:59:35.496505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1014 19:59:35.496567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1014 19:59:35.496658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1014 19:59:35.494753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1014 19:59:36.314025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1014 19:59:36.314182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1014 19:59:36.362504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1014 19:59:36.362664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1014 19:59:36.425670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1014 19:59:36.481820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1014 19:59:36.510151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1014 19:59:36.572937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1014 19:59:36.607415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1014 19:59:36.637726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1014 19:59:36.669497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1014 19:59:36.689035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1014 19:59:37.149859       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 14 20:03:03 addons-864166 kubelet[1310]: I1014 20:03:03.572038    1310 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6msst\" (UniqueName: \"kubernetes.io/projected/e7fdc9ed-b362-41dd-86cd-fd64f8ee3b5e-kube-api-access-6msst\") on node \"addons-864166\" DevicePath \"\""
	Oct 14 20:03:03 addons-864166 kubelet[1310]: I1014 20:03:03.992775    1310 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7fdc9ed-b362-41dd-86cd-fd64f8ee3b5e" path="/var/lib/kubelet/pods/e7fdc9ed-b362-41dd-86cd-fd64f8ee3b5e/volumes"
	Oct 14 20:03:04 addons-864166 kubelet[1310]: I1014 20:03:04.335015    1310 scope.go:117] "RemoveContainer" containerID="b33f9969e0fcbfb3878178cd3d6329ee822e504d9fbcfbb3e841ebd62043998a"
	Oct 14 20:03:15 addons-864166 kubelet[1310]: I1014 20:03:15.990540    1310 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-ph85v" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 20:03:38 addons-864166 kubelet[1310]: I1014 20:03:38.109896    1310 scope.go:117] "RemoveContainer" containerID="1521903af7888920f7286b8438bfb228cb0c9aee3b04a143cc15b2dc52be47b8"
	Oct 14 20:03:38 addons-864166 kubelet[1310]: E1014 20:03:38.115523    1310 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: <nil>, extraDiskErr: could not stat "/var/log/pods/default_test-local-path_08e4a9b6-5169-424c-950a-eee199d55833/busybox/0.log" to get inode usage: stat /var/log/pods/default_test-local-path_08e4a9b6-5169-424c-950a-eee199d55833/busybox/0.log: no such file or directory
	Oct 14 20:03:38 addons-864166 kubelet[1310]: I1014 20:03:38.128682    1310 scope.go:117] "RemoveContainer" containerID="1c05e46f88294949785b696a633cd93f019e7ca5a95d047240ecc2a45b65a063"
	Oct 14 20:03:40 addons-864166 kubelet[1310]: I1014 20:03:40.990266    1310 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-ww88v" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 20:03:52 addons-864166 kubelet[1310]: I1014 20:03:52.989859    1310 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-cgtft" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 20:04:27 addons-864166 kubelet[1310]: I1014 20:04:27.991320    1310 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-ph85v" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 20:04:35 addons-864166 kubelet[1310]: I1014 20:04:35.791172    1310 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6rp57" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 20:04:37 addons-864166 kubelet[1310]: I1014 20:04:37.676790    1310 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6rp57" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 20:04:37 addons-864166 kubelet[1310]: I1014 20:04:37.677287    1310 scope.go:117] "RemoveContainer" containerID="b9e8b36047ce6e28b05f9a5df5ef4e636357ca4981f5dbdf042359c394bd6cb8"
	Oct 14 20:04:38 addons-864166 kubelet[1310]: I1014 20:04:38.173156    1310 scope.go:117] "RemoveContainer" containerID="b9e8b36047ce6e28b05f9a5df5ef4e636357ca4981f5dbdf042359c394bd6cb8"
	Oct 14 20:04:38 addons-864166 kubelet[1310]: I1014 20:04:38.681793    1310 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6rp57" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 20:04:38 addons-864166 kubelet[1310]: I1014 20:04:38.682320    1310 scope.go:117] "RemoveContainer" containerID="f0b89e652073d91eb46ee1c85f6a0b6ea3844ead402361d649e30e594ed7dfa1"
	Oct 14 20:04:38 addons-864166 kubelet[1310]: E1014 20:04:38.682518    1310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-6rp57_kube-system(16928613-b3d8-4dca-854f-3b08843dd017)\"" pod="kube-system/registry-creds-764b6fb674-6rp57" podUID="16928613-b3d8-4dca-854f-3b08843dd017"
	Oct 14 20:04:39 addons-864166 kubelet[1310]: I1014 20:04:39.685009    1310 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6rp57" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 20:04:39 addons-864166 kubelet[1310]: I1014 20:04:39.685069    1310 scope.go:117] "RemoveContainer" containerID="f0b89e652073d91eb46ee1c85f6a0b6ea3844ead402361d649e30e594ed7dfa1"
	Oct 14 20:04:39 addons-864166 kubelet[1310]: E1014 20:04:39.685213    1310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-6rp57_kube-system(16928613-b3d8-4dca-854f-3b08843dd017)\"" pod="kube-system/registry-creds-764b6fb674-6rp57" podUID="16928613-b3d8-4dca-854f-3b08843dd017"
	Oct 14 20:04:41 addons-864166 kubelet[1310]: I1014 20:04:41.990917    1310 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-ww88v" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 20:04:42 addons-864166 kubelet[1310]: I1014 20:04:42.480027    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f80234d7-c06f-44c0-93f8-d4f33fb03b76-gcp-creds\") pod \"hello-world-app-5d498dc89-6kd6b\" (UID: \"f80234d7-c06f-44c0-93f8-d4f33fb03b76\") " pod="default/hello-world-app-5d498dc89-6kd6b"
	Oct 14 20:04:42 addons-864166 kubelet[1310]: I1014 20:04:42.480293    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lstsx\" (UniqueName: \"kubernetes.io/projected/f80234d7-c06f-44c0-93f8-d4f33fb03b76-kube-api-access-lstsx\") pod \"hello-world-app-5d498dc89-6kd6b\" (UID: \"f80234d7-c06f-44c0-93f8-d4f33fb03b76\") " pod="default/hello-world-app-5d498dc89-6kd6b"
	Oct 14 20:04:42 addons-864166 kubelet[1310]: W1014 20:04:42.800958    1310 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7451099bfbc41feb339304668d302d413eb146e933e638dfa5e77237a964fd90/crio-c59e285651bd5a1693682fbc4c2c0553d606ca3ff8c2b3cb8d1f3eeffd4c9488 WatchSource:0}: Error finding container c59e285651bd5a1693682fbc4c2c0553d606ca3ff8c2b3cb8d1f3eeffd4c9488: Status 404 returned error can't find the container with id c59e285651bd5a1693682fbc4c2c0553d606ca3ff8c2b3cb8d1f3eeffd4c9488
	Oct 14 20:04:43 addons-864166 kubelet[1310]: I1014 20:04:43.722202    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-6kd6b" podStartSLOduration=1.12030303 podStartE2EDuration="1.722183538s" podCreationTimestamp="2025-10-14 20:04:42 +0000 UTC" firstStartedPulling="2025-10-14 20:04:42.80597984 +0000 UTC m=+304.979085401" lastFinishedPulling="2025-10-14 20:04:43.407860356 +0000 UTC m=+305.580965909" observedRunningTime="2025-10-14 20:04:43.721072745 +0000 UTC m=+305.894178298" watchObservedRunningTime="2025-10-14 20:04:43.722183538 +0000 UTC m=+305.895289091"
	
	
	==> storage-provisioner [a97b20dece5cef1c39d364f190fe8456df7716b92bb6f6df215eb010bcf02fbd] <==
	W1014 20:04:19.827483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:21.830537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:21.837545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:23.841187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:23.845693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:25.849446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:25.856658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:27.859990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:27.865813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:29.869455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:29.873983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:31.877118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:31.883537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:33.886205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:33.892836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:35.895955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:35.905019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:37.909823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:37.915480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:39.918124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:39.922894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:41.925580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:41.930191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:43.936304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:04:43.942412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-864166 -n addons-864166
helpers_test.go:269: (dbg) Run:  kubectl --context addons-864166 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-fzdhr ingress-nginx-admission-patch-hmv8p
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-864166 describe pod ingress-nginx-admission-create-fzdhr ingress-nginx-admission-patch-hmv8p
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-864166 describe pod ingress-nginx-admission-create-fzdhr ingress-nginx-admission-patch-hmv8p: exit status 1 (110.579117ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fzdhr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-hmv8p" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-864166 describe pod ingress-nginx-admission-create-fzdhr ingress-nginx-admission-patch-hmv8p: exit status 1
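The post-mortem above is a two-stage check: first list every pod whose status.phase is not Running via a field selector, then describe each one. Here the describe returns NotFound for both names, so the two admission pods were evidently deleted between the list and the describe. Below is a minimal client-go sketch of the same field-selector query, assuming a standard kubeconfig; it is an illustration only, not the helpers_test.go implementation.

	// Hypothetical sketch: list pods not in the Running phase across all namespaces,
	// mirroring the `--field-selector=status.phase!=Running` query issued above.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes the default kubeconfig location; adjust for a named context if needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}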
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-864166 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (290.30038ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:04:45.980055  557812 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:04:45.980874  557812 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:04:45.980896  557812 out.go:374] Setting ErrFile to fd 2...
	I1014 20:04:45.980903  557812 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:04:45.981226  557812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:04:45.981622  557812 mustload.go:65] Loading cluster: addons-864166
	I1014 20:04:45.982070  557812 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:04:45.982117  557812 addons.go:606] checking whether the cluster is paused
	I1014 20:04:45.982298  557812 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:04:45.982338  557812 host.go:66] Checking if "addons-864166" exists ...
	I1014 20:04:45.983128  557812 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 20:04:46.008904  557812 ssh_runner.go:195] Run: systemctl --version
	I1014 20:04:46.008970  557812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 20:04:46.028228  557812 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 20:04:46.139167  557812 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:04:46.139259  557812 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:04:46.173847  557812 cri.go:89] found id: "f0b89e652073d91eb46ee1c85f6a0b6ea3844ead402361d649e30e594ed7dfa1"
	I1014 20:04:46.173877  557812 cri.go:89] found id: "78ef6dffdb260387212a93303124e048ee15ae3f59fe9e9bd35468b8d107084b"
	I1014 20:04:46.173883  557812 cri.go:89] found id: "3ed114fa3e5da1f56e85f7fdeb97a771d558093b848ce41b26b0190620cf6a24"
	I1014 20:04:46.173887  557812 cri.go:89] found id: "fc727586dde63cef131e5e2dfdd8994cdc8ea730335a4f871bba2a005befe4e8"
	I1014 20:04:46.173891  557812 cri.go:89] found id: "7ef76141e924284d99fe6357b0cf6e9a9814008d19175065a40cde5b8528e5eb"
	I1014 20:04:46.173896  557812 cri.go:89] found id: "c5c1390f4e419d576d9abf3bc94f4f415d5506cd18519bbef0ed92f76dcb8563"
	I1014 20:04:46.173899  557812 cri.go:89] found id: "f351a573847cd029abb7fab37c1b8dc5b3394d4b549cabc0ca60a3397fc69b5a"
	I1014 20:04:46.173903  557812 cri.go:89] found id: "6b79820d7c3ccb688d9ce73d50754490d90aed59de1d994b1ea5f51e0ef0ce41"
	I1014 20:04:46.173906  557812 cri.go:89] found id: "5c84e205968bfa0f86e3149ec248c3043ee206110d9dd9cae1c33f560650623f"
	I1014 20:04:46.173916  557812 cri.go:89] found id: "bb4fccdbcf0f5935eade34108ffaaeb4808ebf3729950f90af52693ef750035d"
	I1014 20:04:46.173926  557812 cri.go:89] found id: "3ca6b7f2cc2b00918861fa8884e5be14bedcb23e8716f7e52f25cedb6f1c315b"
	I1014 20:04:46.173930  557812 cri.go:89] found id: "6e40d4e7135a05eeab04f2727c8cb3b945e0f7b35864c83e20e72da10b93b9db"
	I1014 20:04:46.173933  557812 cri.go:89] found id: "3e1c1eae40cfd23976decfe14caca8cae08ad49efaea19f05efbabe9e41e3687"
	I1014 20:04:46.173936  557812 cri.go:89] found id: "8b466c8617b98ff2a65c993bf79bfb679961f4698a8118de615eca5609bee49c"
	I1014 20:04:46.173939  557812 cri.go:89] found id: "b80cf57eb9e2bee53026bb3cbe3b0443bc8a1032d1a442cdcd330a87563f6d9f"
	I1014 20:04:46.173945  557812 cri.go:89] found id: "a327e10ba76231afd828f288b7c929804d2fa432f0c2fb5d440b6d4cbfc09050"
	I1014 20:04:46.173949  557812 cri.go:89] found id: "7d43ec8e4714ec16f6fd9c499d2855be2b62082b41c665e68fd2ff0150fb37f0"
	I1014 20:04:46.173953  557812 cri.go:89] found id: "a97b20dece5cef1c39d364f190fe8456df7716b92bb6f6df215eb010bcf02fbd"
	I1014 20:04:46.173956  557812 cri.go:89] found id: "42ea884de055b59b2ffed5e7b9d28454868620756a73d3280a1aa24fb121dc9d"
	I1014 20:04:46.173960  557812 cri.go:89] found id: "85281702b49614b94db01ce11d8ea99610d57489ec6345063e8f199a3d311e3c"
	I1014 20:04:46.173964  557812 cri.go:89] found id: "b10c05c5f1576c84d6e7be3e027ef408b730a5606775cc99dc050a8b46cf99fe"
	I1014 20:04:46.173967  557812 cri.go:89] found id: "9a2e8bf9aef9f2ba07816aca3818d22c434fd19e119b2383aa10a93919d3ece5"
	I1014 20:04:46.173970  557812 cri.go:89] found id: "e4d0d5d58460a7cf4d55d124ee5894e17b05fee1741e46856247cdb064c0b542"
	I1014 20:04:46.173972  557812 cri.go:89] found id: "2db14186188f011755f8c1390f57aedc39076f38c3d4a6a2c599a1788e75fa6b"
	I1014 20:04:46.173975  557812 cri.go:89] found id: ""
	I1014 20:04:46.174034  557812 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 20:04:46.188891  557812 out.go:203] 
	W1014 20:04:46.191890  557812 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:04:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:04:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 20:04:46.191913  557812 out.go:285] * 
	* 
	W1014 20:04:46.199282  557812 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:04:46.202353  557812 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-864166 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
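The disable command never reaches the addon itself: minikube's pre-flight "is the cluster paused?" check lists the kube-system containers with crictl and then runs `sudo runc list -f json` on the node, which exits non-zero with `open /run/runc: no such file or directory` on this crio node, so the command aborts with MK_ADDON_DISABLE_PAUSED (exit status 11). Every later `addons disable` in this report fails the same way. The sketch below is for illustration only and is not minikube's implementation: it performs the same `runc list -f json` paused-container check but treats the missing state directory (the path the error above reports) as "nothing paused" instead of a fatal error.

	// Hypothetical paused-container check modeled on the trace above; illustration only.
	package main

	import (
		"encoding/json"
		"errors"
		"fmt"
		"os"
		"os/exec"
	)

	// runcContainer holds the fields of `runc list -f json` output used below.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// runcRoot is the state directory the error above reports as missing on this node.
	const runcRoot = "/run/runc"

	func pausedContainers() ([]string, error) {
		if _, err := os.Stat(runcRoot); errors.Is(err, os.ErrNotExist) {
			// No state directory means runc is not tracking any containers here,
			// so nothing can be paused; report that instead of aborting.
			return nil, nil
		}
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := pausedContainers()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("paused containers:", ids)
	}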
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-864166 addons disable ingress --alsologtostderr -v=1: exit status 11 (260.073634ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:04:46.259770  557856 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:04:46.260400  557856 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:04:46.260415  557856 out.go:374] Setting ErrFile to fd 2...
	I1014 20:04:46.260420  557856 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:04:46.260800  557856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:04:46.261142  557856 mustload.go:65] Loading cluster: addons-864166
	I1014 20:04:46.261516  557856 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:04:46.261535  557856 addons.go:606] checking whether the cluster is paused
	I1014 20:04:46.261635  557856 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:04:46.261655  557856 host.go:66] Checking if "addons-864166" exists ...
	I1014 20:04:46.262096  557856 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 20:04:46.283756  557856 ssh_runner.go:195] Run: systemctl --version
	I1014 20:04:46.283816  557856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 20:04:46.301515  557856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 20:04:46.402913  557856 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:04:46.403027  557856 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:04:46.433947  557856 cri.go:89] found id: "f0b89e652073d91eb46ee1c85f6a0b6ea3844ead402361d649e30e594ed7dfa1"
	I1014 20:04:46.433968  557856 cri.go:89] found id: "78ef6dffdb260387212a93303124e048ee15ae3f59fe9e9bd35468b8d107084b"
	I1014 20:04:46.433973  557856 cri.go:89] found id: "3ed114fa3e5da1f56e85f7fdeb97a771d558093b848ce41b26b0190620cf6a24"
	I1014 20:04:46.433977  557856 cri.go:89] found id: "fc727586dde63cef131e5e2dfdd8994cdc8ea730335a4f871bba2a005befe4e8"
	I1014 20:04:46.433980  557856 cri.go:89] found id: "7ef76141e924284d99fe6357b0cf6e9a9814008d19175065a40cde5b8528e5eb"
	I1014 20:04:46.433984  557856 cri.go:89] found id: "c5c1390f4e419d576d9abf3bc94f4f415d5506cd18519bbef0ed92f76dcb8563"
	I1014 20:04:46.433987  557856 cri.go:89] found id: "f351a573847cd029abb7fab37c1b8dc5b3394d4b549cabc0ca60a3397fc69b5a"
	I1014 20:04:46.433989  557856 cri.go:89] found id: "6b79820d7c3ccb688d9ce73d50754490d90aed59de1d994b1ea5f51e0ef0ce41"
	I1014 20:04:46.433992  557856 cri.go:89] found id: "5c84e205968bfa0f86e3149ec248c3043ee206110d9dd9cae1c33f560650623f"
	I1014 20:04:46.433998  557856 cri.go:89] found id: "bb4fccdbcf0f5935eade34108ffaaeb4808ebf3729950f90af52693ef750035d"
	I1014 20:04:46.434002  557856 cri.go:89] found id: "3ca6b7f2cc2b00918861fa8884e5be14bedcb23e8716f7e52f25cedb6f1c315b"
	I1014 20:04:46.434005  557856 cri.go:89] found id: "6e40d4e7135a05eeab04f2727c8cb3b945e0f7b35864c83e20e72da10b93b9db"
	I1014 20:04:46.434008  557856 cri.go:89] found id: "3e1c1eae40cfd23976decfe14caca8cae08ad49efaea19f05efbabe9e41e3687"
	I1014 20:04:46.434011  557856 cri.go:89] found id: "8b466c8617b98ff2a65c993bf79bfb679961f4698a8118de615eca5609bee49c"
	I1014 20:04:46.434015  557856 cri.go:89] found id: "b80cf57eb9e2bee53026bb3cbe3b0443bc8a1032d1a442cdcd330a87563f6d9f"
	I1014 20:04:46.434020  557856 cri.go:89] found id: "a327e10ba76231afd828f288b7c929804d2fa432f0c2fb5d440b6d4cbfc09050"
	I1014 20:04:46.434023  557856 cri.go:89] found id: "7d43ec8e4714ec16f6fd9c499d2855be2b62082b41c665e68fd2ff0150fb37f0"
	I1014 20:04:46.434027  557856 cri.go:89] found id: "a97b20dece5cef1c39d364f190fe8456df7716b92bb6f6df215eb010bcf02fbd"
	I1014 20:04:46.434031  557856 cri.go:89] found id: "42ea884de055b59b2ffed5e7b9d28454868620756a73d3280a1aa24fb121dc9d"
	I1014 20:04:46.434034  557856 cri.go:89] found id: "85281702b49614b94db01ce11d8ea99610d57489ec6345063e8f199a3d311e3c"
	I1014 20:04:46.434038  557856 cri.go:89] found id: "b10c05c5f1576c84d6e7be3e027ef408b730a5606775cc99dc050a8b46cf99fe"
	I1014 20:04:46.434041  557856 cri.go:89] found id: "9a2e8bf9aef9f2ba07816aca3818d22c434fd19e119b2383aa10a93919d3ece5"
	I1014 20:04:46.434044  557856 cri.go:89] found id: "e4d0d5d58460a7cf4d55d124ee5894e17b05fee1741e46856247cdb064c0b542"
	I1014 20:04:46.434047  557856 cri.go:89] found id: "2db14186188f011755f8c1390f57aedc39076f38c3d4a6a2c599a1788e75fa6b"
	I1014 20:04:46.434051  557856 cri.go:89] found id: ""
	I1014 20:04:46.434111  557856 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 20:04:46.449618  557856 out.go:203] 
	W1014 20:04:46.452707  557856 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:04:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:04:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 20:04:46.452744  557856 out.go:285] * 
	* 
	W1014 20:04:46.459928  557856 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:04:46.462825  557856 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-864166 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.00s)
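Note: every addons disable/enable failure in this report shows the same pattern in its stderr: minikube's paused-state check lists the kube-system containers with crictl and then runs `sudo runc list -f json` on the node, which exits 1 with "open /run/runc: no such file or directory" on this crio-based profile. A minimal sketch for re-running that check by hand, assuming the addons-864166 profile is still up (both commands are copied from the ssh_runner lines in the log above):

	out/minikube-linux-arm64 -p addons-864166 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-arm64 -p addons-864166 ssh -- sudo runc list -f json

The second command is the one expected to reproduce the /run/runc error seen in each failure below.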

                                                
                                    
TestAddons/parallel/InspektorGadget (5.3s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-jpdp2" [dc571771-0bb0-456e-b075-946f1b353ae0] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.034210804s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-864166 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (263.583231ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:02:21.255184  555323 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:02:21.256042  555323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:02:21.256107  555323 out.go:374] Setting ErrFile to fd 2...
	I1014 20:02:21.256129  555323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:02:21.256538  555323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:02:21.256942  555323 mustload.go:65] Loading cluster: addons-864166
	I1014 20:02:21.257398  555323 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:02:21.257432  555323 addons.go:606] checking whether the cluster is paused
	I1014 20:02:21.257629  555323 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:02:21.257668  555323 host.go:66] Checking if "addons-864166" exists ...
	I1014 20:02:21.258208  555323 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 20:02:21.277409  555323 ssh_runner.go:195] Run: systemctl --version
	I1014 20:02:21.277458  555323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 20:02:21.299116  555323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 20:02:21.403235  555323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:02:21.403327  555323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:02:21.434195  555323 cri.go:89] found id: "78ef6dffdb260387212a93303124e048ee15ae3f59fe9e9bd35468b8d107084b"
	I1014 20:02:21.434214  555323 cri.go:89] found id: "3ed114fa3e5da1f56e85f7fdeb97a771d558093b848ce41b26b0190620cf6a24"
	I1014 20:02:21.434219  555323 cri.go:89] found id: "fc727586dde63cef131e5e2dfdd8994cdc8ea730335a4f871bba2a005befe4e8"
	I1014 20:02:21.434223  555323 cri.go:89] found id: "7ef76141e924284d99fe6357b0cf6e9a9814008d19175065a40cde5b8528e5eb"
	I1014 20:02:21.434239  555323 cri.go:89] found id: "c5c1390f4e419d576d9abf3bc94f4f415d5506cd18519bbef0ed92f76dcb8563"
	I1014 20:02:21.434243  555323 cri.go:89] found id: "f351a573847cd029abb7fab37c1b8dc5b3394d4b549cabc0ca60a3397fc69b5a"
	I1014 20:02:21.434246  555323 cri.go:89] found id: "6b79820d7c3ccb688d9ce73d50754490d90aed59de1d994b1ea5f51e0ef0ce41"
	I1014 20:02:21.434249  555323 cri.go:89] found id: "5c84e205968bfa0f86e3149ec248c3043ee206110d9dd9cae1c33f560650623f"
	I1014 20:02:21.434253  555323 cri.go:89] found id: "bb4fccdbcf0f5935eade34108ffaaeb4808ebf3729950f90af52693ef750035d"
	I1014 20:02:21.434259  555323 cri.go:89] found id: "3ca6b7f2cc2b00918861fa8884e5be14bedcb23e8716f7e52f25cedb6f1c315b"
	I1014 20:02:21.434263  555323 cri.go:89] found id: "6e40d4e7135a05eeab04f2727c8cb3b945e0f7b35864c83e20e72da10b93b9db"
	I1014 20:02:21.434266  555323 cri.go:89] found id: "3e1c1eae40cfd23976decfe14caca8cae08ad49efaea19f05efbabe9e41e3687"
	I1014 20:02:21.434269  555323 cri.go:89] found id: "8b466c8617b98ff2a65c993bf79bfb679961f4698a8118de615eca5609bee49c"
	I1014 20:02:21.434272  555323 cri.go:89] found id: "b80cf57eb9e2bee53026bb3cbe3b0443bc8a1032d1a442cdcd330a87563f6d9f"
	I1014 20:02:21.434274  555323 cri.go:89] found id: "a327e10ba76231afd828f288b7c929804d2fa432f0c2fb5d440b6d4cbfc09050"
	I1014 20:02:21.434279  555323 cri.go:89] found id: "7d43ec8e4714ec16f6fd9c499d2855be2b62082b41c665e68fd2ff0150fb37f0"
	I1014 20:02:21.434282  555323 cri.go:89] found id: "a97b20dece5cef1c39d364f190fe8456df7716b92bb6f6df215eb010bcf02fbd"
	I1014 20:02:21.434286  555323 cri.go:89] found id: "42ea884de055b59b2ffed5e7b9d28454868620756a73d3280a1aa24fb121dc9d"
	I1014 20:02:21.434289  555323 cri.go:89] found id: "85281702b49614b94db01ce11d8ea99610d57489ec6345063e8f199a3d311e3c"
	I1014 20:02:21.434292  555323 cri.go:89] found id: "b10c05c5f1576c84d6e7be3e027ef408b730a5606775cc99dc050a8b46cf99fe"
	I1014 20:02:21.434296  555323 cri.go:89] found id: "9a2e8bf9aef9f2ba07816aca3818d22c434fd19e119b2383aa10a93919d3ece5"
	I1014 20:02:21.434299  555323 cri.go:89] found id: "e4d0d5d58460a7cf4d55d124ee5894e17b05fee1741e46856247cdb064c0b542"
	I1014 20:02:21.434302  555323 cri.go:89] found id: "2db14186188f011755f8c1390f57aedc39076f38c3d4a6a2c599a1788e75fa6b"
	I1014 20:02:21.434305  555323 cri.go:89] found id: ""
	I1014 20:02:21.434354  555323 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 20:02:21.449747  555323 out.go:203] 
	W1014 20:02:21.452631  555323 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:02:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:02:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 20:02:21.452661  555323 out.go:285] * 
	* 
	W1014 20:02:21.459784  555323 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:02:21.462759  555323 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-864166 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.30s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.53s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.044034ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-q2bhj" [bf121c3b-58d6-4f5d-9ec7-3f6356b23132] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003483292s
addons_test.go:463: (dbg) Run:  kubectl --context addons-864166 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-864166 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (395.853111ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:02:15.825968  555234 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:02:15.827990  555234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:02:15.828009  555234 out.go:374] Setting ErrFile to fd 2...
	I1014 20:02:15.828015  555234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:02:15.828303  555234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:02:15.828665  555234 mustload.go:65] Loading cluster: addons-864166
	I1014 20:02:15.829035  555234 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:02:15.829054  555234 addons.go:606] checking whether the cluster is paused
	I1014 20:02:15.829156  555234 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:02:15.829175  555234 host.go:66] Checking if "addons-864166" exists ...
	I1014 20:02:15.829608  555234 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 20:02:15.866833  555234 ssh_runner.go:195] Run: systemctl --version
	I1014 20:02:15.867007  555234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 20:02:15.903511  555234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 20:02:16.037298  555234 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:02:16.037397  555234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:02:16.103502  555234 cri.go:89] found id: "78ef6dffdb260387212a93303124e048ee15ae3f59fe9e9bd35468b8d107084b"
	I1014 20:02:16.103521  555234 cri.go:89] found id: "3ed114fa3e5da1f56e85f7fdeb97a771d558093b848ce41b26b0190620cf6a24"
	I1014 20:02:16.103526  555234 cri.go:89] found id: "fc727586dde63cef131e5e2dfdd8994cdc8ea730335a4f871bba2a005befe4e8"
	I1014 20:02:16.103529  555234 cri.go:89] found id: "7ef76141e924284d99fe6357b0cf6e9a9814008d19175065a40cde5b8528e5eb"
	I1014 20:02:16.103532  555234 cri.go:89] found id: "c5c1390f4e419d576d9abf3bc94f4f415d5506cd18519bbef0ed92f76dcb8563"
	I1014 20:02:16.103536  555234 cri.go:89] found id: "f351a573847cd029abb7fab37c1b8dc5b3394d4b549cabc0ca60a3397fc69b5a"
	I1014 20:02:16.103539  555234 cri.go:89] found id: "6b79820d7c3ccb688d9ce73d50754490d90aed59de1d994b1ea5f51e0ef0ce41"
	I1014 20:02:16.103543  555234 cri.go:89] found id: "5c84e205968bfa0f86e3149ec248c3043ee206110d9dd9cae1c33f560650623f"
	I1014 20:02:16.103546  555234 cri.go:89] found id: "bb4fccdbcf0f5935eade34108ffaaeb4808ebf3729950f90af52693ef750035d"
	I1014 20:02:16.103556  555234 cri.go:89] found id: "3ca6b7f2cc2b00918861fa8884e5be14bedcb23e8716f7e52f25cedb6f1c315b"
	I1014 20:02:16.103559  555234 cri.go:89] found id: "6e40d4e7135a05eeab04f2727c8cb3b945e0f7b35864c83e20e72da10b93b9db"
	I1014 20:02:16.103562  555234 cri.go:89] found id: "3e1c1eae40cfd23976decfe14caca8cae08ad49efaea19f05efbabe9e41e3687"
	I1014 20:02:16.103566  555234 cri.go:89] found id: "8b466c8617b98ff2a65c993bf79bfb679961f4698a8118de615eca5609bee49c"
	I1014 20:02:16.103569  555234 cri.go:89] found id: "b80cf57eb9e2bee53026bb3cbe3b0443bc8a1032d1a442cdcd330a87563f6d9f"
	I1014 20:02:16.103572  555234 cri.go:89] found id: "a327e10ba76231afd828f288b7c929804d2fa432f0c2fb5d440b6d4cbfc09050"
	I1014 20:02:16.103579  555234 cri.go:89] found id: "7d43ec8e4714ec16f6fd9c499d2855be2b62082b41c665e68fd2ff0150fb37f0"
	I1014 20:02:16.103582  555234 cri.go:89] found id: "a97b20dece5cef1c39d364f190fe8456df7716b92bb6f6df215eb010bcf02fbd"
	I1014 20:02:16.103587  555234 cri.go:89] found id: "42ea884de055b59b2ffed5e7b9d28454868620756a73d3280a1aa24fb121dc9d"
	I1014 20:02:16.103590  555234 cri.go:89] found id: "85281702b49614b94db01ce11d8ea99610d57489ec6345063e8f199a3d311e3c"
	I1014 20:02:16.103594  555234 cri.go:89] found id: "b10c05c5f1576c84d6e7be3e027ef408b730a5606775cc99dc050a8b46cf99fe"
	I1014 20:02:16.103598  555234 cri.go:89] found id: "9a2e8bf9aef9f2ba07816aca3818d22c434fd19e119b2383aa10a93919d3ece5"
	I1014 20:02:16.103601  555234 cri.go:89] found id: "e4d0d5d58460a7cf4d55d124ee5894e17b05fee1741e46856247cdb064c0b542"
	I1014 20:02:16.103604  555234 cri.go:89] found id: "2db14186188f011755f8c1390f57aedc39076f38c3d4a6a2c599a1788e75fa6b"
	I1014 20:02:16.103607  555234 cri.go:89] found id: ""
	I1014 20:02:16.103655  555234 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 20:02:16.142798  555234 out.go:203] 
	W1014 20:02:16.149313  555234 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:02:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:02:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 20:02:16.149412  555234 out.go:285] * 
	* 
	W1014 20:02:16.157534  555234 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:02:16.164264  555234 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-864166 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.53s)

                                                
                                    
TestAddons/parallel/CSI (52.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1014 20:01:58.670440  546910 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1014 20:01:58.674799  546910 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1014 20:01:58.674831  546910 kapi.go:107] duration metric: took 4.41246ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.422413ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-864166 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-864166 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [49eab3ce-ca4a-4c27-8941-1ec3fdae2967] Pending
helpers_test.go:352: "task-pv-pod" [49eab3ce-ca4a-4c27-8941-1ec3fdae2967] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [49eab3ce-ca4a-4c27-8941-1ec3fdae2967] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.087276874s
addons_test.go:572: (dbg) Run:  kubectl --context addons-864166 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-864166 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-864166 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-864166 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-864166 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-864166 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-864166 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [eade5d74-dd59-417a-8657-51c2549ca73a] Pending
helpers_test.go:352: "task-pv-pod-restore" [eade5d74-dd59-417a-8657-51c2549ca73a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [eade5d74-dd59-417a-8657-51c2549ca73a] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003667588s
addons_test.go:614: (dbg) Run:  kubectl --context addons-864166 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-864166 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-864166 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-864166 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (256.558524ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:02:50.732952  556106 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:02:50.733840  556106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:02:50.733884  556106 out.go:374] Setting ErrFile to fd 2...
	I1014 20:02:50.733905  556106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:02:50.734264  556106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:02:50.734618  556106 mustload.go:65] Loading cluster: addons-864166
	I1014 20:02:50.735014  556106 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:02:50.735065  556106 addons.go:606] checking whether the cluster is paused
	I1014 20:02:50.735192  556106 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:02:50.735233  556106 host.go:66] Checking if "addons-864166" exists ...
	I1014 20:02:50.735714  556106 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 20:02:50.756257  556106 ssh_runner.go:195] Run: systemctl --version
	I1014 20:02:50.756341  556106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 20:02:50.775048  556106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 20:02:50.874999  556106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:02:50.875141  556106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:02:50.905561  556106 cri.go:89] found id: "78ef6dffdb260387212a93303124e048ee15ae3f59fe9e9bd35468b8d107084b"
	I1014 20:02:50.905592  556106 cri.go:89] found id: "3ed114fa3e5da1f56e85f7fdeb97a771d558093b848ce41b26b0190620cf6a24"
	I1014 20:02:50.905597  556106 cri.go:89] found id: "fc727586dde63cef131e5e2dfdd8994cdc8ea730335a4f871bba2a005befe4e8"
	I1014 20:02:50.905602  556106 cri.go:89] found id: "7ef76141e924284d99fe6357b0cf6e9a9814008d19175065a40cde5b8528e5eb"
	I1014 20:02:50.905605  556106 cri.go:89] found id: "c5c1390f4e419d576d9abf3bc94f4f415d5506cd18519bbef0ed92f76dcb8563"
	I1014 20:02:50.905608  556106 cri.go:89] found id: "f351a573847cd029abb7fab37c1b8dc5b3394d4b549cabc0ca60a3397fc69b5a"
	I1014 20:02:50.905612  556106 cri.go:89] found id: "6b79820d7c3ccb688d9ce73d50754490d90aed59de1d994b1ea5f51e0ef0ce41"
	I1014 20:02:50.905620  556106 cri.go:89] found id: "5c84e205968bfa0f86e3149ec248c3043ee206110d9dd9cae1c33f560650623f"
	I1014 20:02:50.905624  556106 cri.go:89] found id: "bb4fccdbcf0f5935eade34108ffaaeb4808ebf3729950f90af52693ef750035d"
	I1014 20:02:50.905631  556106 cri.go:89] found id: "3ca6b7f2cc2b00918861fa8884e5be14bedcb23e8716f7e52f25cedb6f1c315b"
	I1014 20:02:50.905634  556106 cri.go:89] found id: "6e40d4e7135a05eeab04f2727c8cb3b945e0f7b35864c83e20e72da10b93b9db"
	I1014 20:02:50.905637  556106 cri.go:89] found id: "3e1c1eae40cfd23976decfe14caca8cae08ad49efaea19f05efbabe9e41e3687"
	I1014 20:02:50.905649  556106 cri.go:89] found id: "8b466c8617b98ff2a65c993bf79bfb679961f4698a8118de615eca5609bee49c"
	I1014 20:02:50.905657  556106 cri.go:89] found id: "b80cf57eb9e2bee53026bb3cbe3b0443bc8a1032d1a442cdcd330a87563f6d9f"
	I1014 20:02:50.905660  556106 cri.go:89] found id: "a327e10ba76231afd828f288b7c929804d2fa432f0c2fb5d440b6d4cbfc09050"
	I1014 20:02:50.905672  556106 cri.go:89] found id: "7d43ec8e4714ec16f6fd9c499d2855be2b62082b41c665e68fd2ff0150fb37f0"
	I1014 20:02:50.905679  556106 cri.go:89] found id: "a97b20dece5cef1c39d364f190fe8456df7716b92bb6f6df215eb010bcf02fbd"
	I1014 20:02:50.905683  556106 cri.go:89] found id: "42ea884de055b59b2ffed5e7b9d28454868620756a73d3280a1aa24fb121dc9d"
	I1014 20:02:50.905688  556106 cri.go:89] found id: "85281702b49614b94db01ce11d8ea99610d57489ec6345063e8f199a3d311e3c"
	I1014 20:02:50.905691  556106 cri.go:89] found id: "b10c05c5f1576c84d6e7be3e027ef408b730a5606775cc99dc050a8b46cf99fe"
	I1014 20:02:50.905696  556106 cri.go:89] found id: "9a2e8bf9aef9f2ba07816aca3818d22c434fd19e119b2383aa10a93919d3ece5"
	I1014 20:02:50.905703  556106 cri.go:89] found id: "e4d0d5d58460a7cf4d55d124ee5894e17b05fee1741e46856247cdb064c0b542"
	I1014 20:02:50.905707  556106 cri.go:89] found id: "2db14186188f011755f8c1390f57aedc39076f38c3d4a6a2c599a1788e75fa6b"
	I1014 20:02:50.905711  556106 cri.go:89] found id: ""
	I1014 20:02:50.905762  556106 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 20:02:50.921330  556106 out.go:203] 
	W1014 20:02:50.924374  556106 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:02:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:02:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 20:02:50.924402  556106 out.go:285] * 
	* 
	W1014 20:02:50.931524  556106 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:02:50.934608  556106 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-864166 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-864166 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (271.092988ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:02:50.994394  556149 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:02:50.995187  556149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:02:50.995204  556149 out.go:374] Setting ErrFile to fd 2...
	I1014 20:02:50.995211  556149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:02:50.995502  556149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:02:50.995810  556149 mustload.go:65] Loading cluster: addons-864166
	I1014 20:02:50.996226  556149 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:02:50.996242  556149 addons.go:606] checking whether the cluster is paused
	I1014 20:02:50.996378  556149 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:02:50.996400  556149 host.go:66] Checking if "addons-864166" exists ...
	I1014 20:02:50.996839  556149 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 20:02:51.016513  556149 ssh_runner.go:195] Run: systemctl --version
	I1014 20:02:51.016580  556149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 20:02:51.035510  556149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 20:02:51.147733  556149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:02:51.147931  556149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:02:51.180087  556149 cri.go:89] found id: "78ef6dffdb260387212a93303124e048ee15ae3f59fe9e9bd35468b8d107084b"
	I1014 20:02:51.180156  556149 cri.go:89] found id: "3ed114fa3e5da1f56e85f7fdeb97a771d558093b848ce41b26b0190620cf6a24"
	I1014 20:02:51.180176  556149 cri.go:89] found id: "fc727586dde63cef131e5e2dfdd8994cdc8ea730335a4f871bba2a005befe4e8"
	I1014 20:02:51.180196  556149 cri.go:89] found id: "7ef76141e924284d99fe6357b0cf6e9a9814008d19175065a40cde5b8528e5eb"
	I1014 20:02:51.180231  556149 cri.go:89] found id: "c5c1390f4e419d576d9abf3bc94f4f415d5506cd18519bbef0ed92f76dcb8563"
	I1014 20:02:51.180252  556149 cri.go:89] found id: "f351a573847cd029abb7fab37c1b8dc5b3394d4b549cabc0ca60a3397fc69b5a"
	I1014 20:02:51.180269  556149 cri.go:89] found id: "6b79820d7c3ccb688d9ce73d50754490d90aed59de1d994b1ea5f51e0ef0ce41"
	I1014 20:02:51.180286  556149 cri.go:89] found id: "5c84e205968bfa0f86e3149ec248c3043ee206110d9dd9cae1c33f560650623f"
	I1014 20:02:51.180307  556149 cri.go:89] found id: "bb4fccdbcf0f5935eade34108ffaaeb4808ebf3729950f90af52693ef750035d"
	I1014 20:02:51.180384  556149 cri.go:89] found id: "3ca6b7f2cc2b00918861fa8884e5be14bedcb23e8716f7e52f25cedb6f1c315b"
	I1014 20:02:51.180397  556149 cri.go:89] found id: "6e40d4e7135a05eeab04f2727c8cb3b945e0f7b35864c83e20e72da10b93b9db"
	I1014 20:02:51.180402  556149 cri.go:89] found id: "3e1c1eae40cfd23976decfe14caca8cae08ad49efaea19f05efbabe9e41e3687"
	I1014 20:02:51.180405  556149 cri.go:89] found id: "8b466c8617b98ff2a65c993bf79bfb679961f4698a8118de615eca5609bee49c"
	I1014 20:02:51.180409  556149 cri.go:89] found id: "b80cf57eb9e2bee53026bb3cbe3b0443bc8a1032d1a442cdcd330a87563f6d9f"
	I1014 20:02:51.180412  556149 cri.go:89] found id: "a327e10ba76231afd828f288b7c929804d2fa432f0c2fb5d440b6d4cbfc09050"
	I1014 20:02:51.180419  556149 cri.go:89] found id: "7d43ec8e4714ec16f6fd9c499d2855be2b62082b41c665e68fd2ff0150fb37f0"
	I1014 20:02:51.180422  556149 cri.go:89] found id: "a97b20dece5cef1c39d364f190fe8456df7716b92bb6f6df215eb010bcf02fbd"
	I1014 20:02:51.180428  556149 cri.go:89] found id: "42ea884de055b59b2ffed5e7b9d28454868620756a73d3280a1aa24fb121dc9d"
	I1014 20:02:51.180431  556149 cri.go:89] found id: "85281702b49614b94db01ce11d8ea99610d57489ec6345063e8f199a3d311e3c"
	I1014 20:02:51.180453  556149 cri.go:89] found id: "b10c05c5f1576c84d6e7be3e027ef408b730a5606775cc99dc050a8b46cf99fe"
	I1014 20:02:51.180460  556149 cri.go:89] found id: "9a2e8bf9aef9f2ba07816aca3818d22c434fd19e119b2383aa10a93919d3ece5"
	I1014 20:02:51.180474  556149 cri.go:89] found id: "e4d0d5d58460a7cf4d55d124ee5894e17b05fee1741e46856247cdb064c0b542"
	I1014 20:02:51.180477  556149 cri.go:89] found id: "2db14186188f011755f8c1390f57aedc39076f38c3d4a6a2c599a1788e75fa6b"
	I1014 20:02:51.180480  556149 cri.go:89] found id: ""
	I1014 20:02:51.180544  556149 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 20:02:51.195522  556149 out.go:203] 
	W1014 20:02:51.198340  556149 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:02:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:02:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 20:02:51.198364  556149 out.go:285] * 
	* 
	W1014 20:02:51.205625  556149 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:02:51.208484  556149 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-864166 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (52.55s)

                                                
                                    
TestAddons/parallel/Headlamp (3.15s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-864166 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-864166 --alsologtostderr -v=1: exit status 11 (256.025327ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:01:55.565239  554286 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:01:55.566243  554286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:01:55.566291  554286 out.go:374] Setting ErrFile to fd 2...
	I1014 20:01:55.566313  554286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:01:55.566624  554286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:01:55.566997  554286 mustload.go:65] Loading cluster: addons-864166
	I1014 20:01:55.567414  554286 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:01:55.567458  554286 addons.go:606] checking whether the cluster is paused
	I1014 20:01:55.567615  554286 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:01:55.567660  554286 host.go:66] Checking if "addons-864166" exists ...
	I1014 20:01:55.568161  554286 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 20:01:55.586265  554286 ssh_runner.go:195] Run: systemctl --version
	I1014 20:01:55.586323  554286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 20:01:55.605139  554286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 20:01:55.707082  554286 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:01:55.707191  554286 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:01:55.736436  554286 cri.go:89] found id: "78ef6dffdb260387212a93303124e048ee15ae3f59fe9e9bd35468b8d107084b"
	I1014 20:01:55.736461  554286 cri.go:89] found id: "3ed114fa3e5da1f56e85f7fdeb97a771d558093b848ce41b26b0190620cf6a24"
	I1014 20:01:55.736466  554286 cri.go:89] found id: "fc727586dde63cef131e5e2dfdd8994cdc8ea730335a4f871bba2a005befe4e8"
	I1014 20:01:55.736469  554286 cri.go:89] found id: "7ef76141e924284d99fe6357b0cf6e9a9814008d19175065a40cde5b8528e5eb"
	I1014 20:01:55.736472  554286 cri.go:89] found id: "c5c1390f4e419d576d9abf3bc94f4f415d5506cd18519bbef0ed92f76dcb8563"
	I1014 20:01:55.736476  554286 cri.go:89] found id: "f351a573847cd029abb7fab37c1b8dc5b3394d4b549cabc0ca60a3397fc69b5a"
	I1014 20:01:55.736479  554286 cri.go:89] found id: "6b79820d7c3ccb688d9ce73d50754490d90aed59de1d994b1ea5f51e0ef0ce41"
	I1014 20:01:55.736482  554286 cri.go:89] found id: "5c84e205968bfa0f86e3149ec248c3043ee206110d9dd9cae1c33f560650623f"
	I1014 20:01:55.736486  554286 cri.go:89] found id: "bb4fccdbcf0f5935eade34108ffaaeb4808ebf3729950f90af52693ef750035d"
	I1014 20:01:55.736497  554286 cri.go:89] found id: "3ca6b7f2cc2b00918861fa8884e5be14bedcb23e8716f7e52f25cedb6f1c315b"
	I1014 20:01:55.736500  554286 cri.go:89] found id: "6e40d4e7135a05eeab04f2727c8cb3b945e0f7b35864c83e20e72da10b93b9db"
	I1014 20:01:55.736503  554286 cri.go:89] found id: "3e1c1eae40cfd23976decfe14caca8cae08ad49efaea19f05efbabe9e41e3687"
	I1014 20:01:55.736507  554286 cri.go:89] found id: "8b466c8617b98ff2a65c993bf79bfb679961f4698a8118de615eca5609bee49c"
	I1014 20:01:55.736510  554286 cri.go:89] found id: "b80cf57eb9e2bee53026bb3cbe3b0443bc8a1032d1a442cdcd330a87563f6d9f"
	I1014 20:01:55.736513  554286 cri.go:89] found id: "a327e10ba76231afd828f288b7c929804d2fa432f0c2fb5d440b6d4cbfc09050"
	I1014 20:01:55.736518  554286 cri.go:89] found id: "7d43ec8e4714ec16f6fd9c499d2855be2b62082b41c665e68fd2ff0150fb37f0"
	I1014 20:01:55.736524  554286 cri.go:89] found id: "a97b20dece5cef1c39d364f190fe8456df7716b92bb6f6df215eb010bcf02fbd"
	I1014 20:01:55.736528  554286 cri.go:89] found id: "42ea884de055b59b2ffed5e7b9d28454868620756a73d3280a1aa24fb121dc9d"
	I1014 20:01:55.736531  554286 cri.go:89] found id: "85281702b49614b94db01ce11d8ea99610d57489ec6345063e8f199a3d311e3c"
	I1014 20:01:55.736534  554286 cri.go:89] found id: "b10c05c5f1576c84d6e7be3e027ef408b730a5606775cc99dc050a8b46cf99fe"
	I1014 20:01:55.736538  554286 cri.go:89] found id: "9a2e8bf9aef9f2ba07816aca3818d22c434fd19e119b2383aa10a93919d3ece5"
	I1014 20:01:55.736542  554286 cri.go:89] found id: "e4d0d5d58460a7cf4d55d124ee5894e17b05fee1741e46856247cdb064c0b542"
	I1014 20:01:55.736545  554286 cri.go:89] found id: "2db14186188f011755f8c1390f57aedc39076f38c3d4a6a2c599a1788e75fa6b"
	I1014 20:01:55.736548  554286 cri.go:89] found id: ""
	I1014 20:01:55.736600  554286 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 20:01:55.751567  554286 out.go:203] 
	W1014 20:01:55.754360  554286 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:01:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:01:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 20:01:55.754385  554286 out.go:285] * 
	* 
	W1014 20:01:55.761671  554286 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:01:55.764459  554286 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-864166 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-864166
helpers_test.go:243: (dbg) docker inspect addons-864166:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7451099bfbc41feb339304668d302d413eb146e933e638dfa5e77237a964fd90",
	        "Created": "2025-10-14T19:59:08.394875124Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 548071,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T19:59:08.46061074Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/7451099bfbc41feb339304668d302d413eb146e933e638dfa5e77237a964fd90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7451099bfbc41feb339304668d302d413eb146e933e638dfa5e77237a964fd90/hostname",
	        "HostsPath": "/var/lib/docker/containers/7451099bfbc41feb339304668d302d413eb146e933e638dfa5e77237a964fd90/hosts",
	        "LogPath": "/var/lib/docker/containers/7451099bfbc41feb339304668d302d413eb146e933e638dfa5e77237a964fd90/7451099bfbc41feb339304668d302d413eb146e933e638dfa5e77237a964fd90-json.log",
	        "Name": "/addons-864166",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-864166:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-864166",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7451099bfbc41feb339304668d302d413eb146e933e638dfa5e77237a964fd90",
	                "LowerDir": "/var/lib/docker/overlay2/2f2a88b10d125f6aaea517e43a2e0800d882f5b574df3ba7c57740ffc15694c4-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2f2a88b10d125f6aaea517e43a2e0800d882f5b574df3ba7c57740ffc15694c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2f2a88b10d125f6aaea517e43a2e0800d882f5b574df3ba7c57740ffc15694c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2f2a88b10d125f6aaea517e43a2e0800d882f5b574df3ba7c57740ffc15694c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-864166",
	                "Source": "/var/lib/docker/volumes/addons-864166/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-864166",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-864166",
	                "name.minikube.sigs.k8s.io": "addons-864166",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "be5a04316aaeb19d26e3e2d0bf6f4cd3f78ecd0d6fe86a3b2acf28a7fff5b609",
	            "SandboxKey": "/var/run/docker/netns/be5a04316aae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-864166": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:67:89:2e:6e:65",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e3a58de1d4bdf03bb1ca046984c4ec6e75aca4aca92e4ea3848e0b272acbfbba",
	                    "EndpointID": "e96875bb542fc4d29234fbee842e81794c3f7af944116a2ac8105d4873e2201e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-864166",
	                        "7451099bfbc4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-864166 -n addons-864166
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-864166 logs -n 25: (1.397851421s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-929479 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-929479   │ jenkins │ v1.37.0 │ 14 Oct 25 19:57 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │ 14 Oct 25 19:58 UTC │
	│ delete  │ -p download-only-929479                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-929479   │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │ 14 Oct 25 19:58 UTC │
	│ start   │ -o=json --download-only -p download-only-571258 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-571258   │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │ 14 Oct 25 19:58 UTC │
	│ delete  │ -p download-only-571258                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-571258   │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │ 14 Oct 25 19:58 UTC │
	│ delete  │ -p download-only-929479                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-929479   │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │ 14 Oct 25 19:58 UTC │
	│ delete  │ -p download-only-571258                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-571258   │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │ 14 Oct 25 19:58 UTC │
	│ start   │ --download-only -p download-docker-282779 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-282779 │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │                     │
	│ delete  │ -p download-docker-282779                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-282779 │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │ 14 Oct 25 19:58 UTC │
	│ start   │ --download-only -p binary-mirror-998704 --alsologtostderr --binary-mirror http://127.0.0.1:42403 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-998704   │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │                     │
	│ delete  │ -p binary-mirror-998704                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-998704   │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │ 14 Oct 25 19:58 UTC │
	│ addons  │ enable dashboard -p addons-864166                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │                     │
	│ addons  │ disable dashboard -p addons-864166                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │                     │
	│ start   │ -p addons-864166 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │ 14 Oct 25 20:01 UTC │
	│ addons  │ addons-864166 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:01 UTC │                     │
	│ addons  │ addons-864166 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:01 UTC │                     │
	│ addons  │ enable headlamp -p addons-864166 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-864166          │ jenkins │ v1.37.0 │ 14 Oct 25 20:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:58:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:58:41.639027  547669 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:58:41.639137  547669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:58:41.639146  547669 out.go:374] Setting ErrFile to fd 2...
	I1014 19:58:41.639151  547669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:58:41.639412  547669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 19:58:41.639849  547669 out.go:368] Setting JSON to false
	I1014 19:58:41.640726  547669 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9672,"bootTime":1760462250,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 19:58:41.640797  547669 start.go:141] virtualization:  
	I1014 19:58:41.644186  547669 out.go:179] * [addons-864166] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 19:58:41.648066  547669 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:58:41.648121  547669 notify.go:220] Checking for updates...
	I1014 19:58:41.653788  547669 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:58:41.656654  547669 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 19:58:41.659504  547669 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 19:58:41.662424  547669 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 19:58:41.665236  547669 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:58:41.668299  547669 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:58:41.691761  547669 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 19:58:41.691890  547669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:58:41.755907  547669 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-14 19:58:41.746778275 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 19:58:41.756012  547669 docker.go:318] overlay module found
	I1014 19:58:41.759164  547669 out.go:179] * Using the docker driver based on user configuration
	I1014 19:58:41.761976  547669 start.go:305] selected driver: docker
	I1014 19:58:41.761996  547669 start.go:925] validating driver "docker" against <nil>
	I1014 19:58:41.762011  547669 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:58:41.762743  547669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:58:41.816842  547669 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-14 19:58:41.808180497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 19:58:41.817000  547669 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 19:58:41.817237  547669 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 19:58:41.820194  547669 out.go:179] * Using Docker driver with root privileges
	I1014 19:58:41.823068  547669 cni.go:84] Creating CNI manager for ""
	I1014 19:58:41.823143  547669 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:58:41.823161  547669 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 19:58:41.823246  547669 start.go:349] cluster config:
	{Name:addons-864166 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-864166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:58:41.828122  547669 out.go:179] * Starting "addons-864166" primary control-plane node in "addons-864166" cluster
	I1014 19:58:41.830983  547669 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 19:58:41.833899  547669 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 19:58:41.836757  547669 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:58:41.836813  547669 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 19:58:41.836827  547669 cache.go:58] Caching tarball of preloaded images
	I1014 19:58:41.836838  547669 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 19:58:41.836925  547669 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 19:58:41.836935  547669 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 19:58:41.837272  547669 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/config.json ...
	I1014 19:58:41.837302  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/config.json: {Name:mkd4c5f5a8f0cbd8c057a027d6233016271076f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:58:41.852403  547669 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1014 19:58:41.852551  547669 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1014 19:58:41.852574  547669 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1014 19:58:41.852582  547669 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1014 19:58:41.852590  547669 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1014 19:58:41.852596  547669 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from local cache
	I1014 19:58:59.919684  547669 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from cached tarball
	I1014 19:58:59.919722  547669 cache.go:232] Successfully downloaded all kic artifacts
	I1014 19:58:59.919751  547669 start.go:360] acquireMachinesLock for addons-864166: {Name:mk2ce089a8fc2998b0a1e4a216c7aeae941b6849 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 19:58:59.919867  547669 start.go:364] duration metric: took 94.942µs to acquireMachinesLock for "addons-864166"
	I1014 19:58:59.919900  547669 start.go:93] Provisioning new machine with config: &{Name:addons-864166 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-864166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 19:58:59.919989  547669 start.go:125] createHost starting for "" (driver="docker")
	I1014 19:58:59.923423  547669 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1014 19:58:59.923667  547669 start.go:159] libmachine.API.Create for "addons-864166" (driver="docker")
	I1014 19:58:59.923710  547669 client.go:168] LocalClient.Create starting
	I1014 19:58:59.923819  547669 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem
	I1014 19:59:01.300154  547669 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem
	I1014 19:59:01.593064  547669 cli_runner.go:164] Run: docker network inspect addons-864166 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 19:59:01.609867  547669 cli_runner.go:211] docker network inspect addons-864166 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 19:59:01.609965  547669 network_create.go:284] running [docker network inspect addons-864166] to gather additional debugging logs...
	I1014 19:59:01.609989  547669 cli_runner.go:164] Run: docker network inspect addons-864166
	W1014 19:59:01.625983  547669 cli_runner.go:211] docker network inspect addons-864166 returned with exit code 1
	I1014 19:59:01.626016  547669 network_create.go:287] error running [docker network inspect addons-864166]: docker network inspect addons-864166: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-864166 not found
	I1014 19:59:01.626046  547669 network_create.go:289] output of [docker network inspect addons-864166]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-864166 not found
	
	** /stderr **
	I1014 19:59:01.626145  547669 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 19:59:01.641333  547669 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a03c10}
	I1014 19:59:01.641384  547669 network_create.go:124] attempt to create docker network addons-864166 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 19:59:01.641437  547669 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-864166 addons-864166
	I1014 19:59:01.691182  547669 network_create.go:108] docker network addons-864166 192.168.49.0/24 created
	I1014 19:59:01.691236  547669 kic.go:121] calculated static IP "192.168.49.2" for the "addons-864166" container
	I1014 19:59:01.691308  547669 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 19:59:01.706081  547669 cli_runner.go:164] Run: docker volume create addons-864166 --label name.minikube.sigs.k8s.io=addons-864166 --label created_by.minikube.sigs.k8s.io=true
	I1014 19:59:01.722403  547669 oci.go:103] Successfully created a docker volume addons-864166
	I1014 19:59:01.722494  547669 cli_runner.go:164] Run: docker run --rm --name addons-864166-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-864166 --entrypoint /usr/bin/test -v addons-864166:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 19:59:03.900569  547669 cli_runner.go:217] Completed: docker run --rm --name addons-864166-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-864166 --entrypoint /usr/bin/test -v addons-864166:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (2.178023903s)
	I1014 19:59:03.900599  547669 oci.go:107] Successfully prepared a docker volume addons-864166
	I1014 19:59:03.900627  547669 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:59:03.900650  547669 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 19:59:03.900719  547669 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-864166:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 19:59:08.327862  547669 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-864166:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.427089557s)
	I1014 19:59:08.327895  547669 kic.go:203] duration metric: took 4.427245677s to extract preloaded images to volume ...
	W1014 19:59:08.328027  547669 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1014 19:59:08.328135  547669 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 19:59:08.380523  547669 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-864166 --name addons-864166 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-864166 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-864166 --network addons-864166 --ip 192.168.49.2 --volume addons-864166:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 19:59:08.677398  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Running}}
	I1014 19:59:08.700964  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:08.722986  547669 cli_runner.go:164] Run: docker exec addons-864166 stat /var/lib/dpkg/alternatives/iptables
	I1014 19:59:08.775969  547669 oci.go:144] the created container "addons-864166" has a running status.
	I1014 19:59:08.775996  547669 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa...
	I1014 19:59:09.019931  547669 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 19:59:09.054172  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:09.075760  547669 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 19:59:09.075786  547669 kic_runner.go:114] Args: [docker exec --privileged addons-864166 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 19:59:09.150803  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:09.191410  547669 machine.go:93] provisionDockerMachine start ...
	I1014 19:59:09.191527  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:09.211979  547669 main.go:141] libmachine: Using SSH client type: native
	I1014 19:59:09.212305  547669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33507 <nil> <nil>}
	I1014 19:59:09.212487  547669 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 19:59:09.213174  547669 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37518->127.0.0.1:33507: read: connection reset by peer
	I1014 19:59:12.355908  547669 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-864166
	
	I1014 19:59:12.355943  547669 ubuntu.go:182] provisioning hostname "addons-864166"
	I1014 19:59:12.356031  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:12.372774  547669 main.go:141] libmachine: Using SSH client type: native
	I1014 19:59:12.373073  547669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33507 <nil> <nil>}
	I1014 19:59:12.373088  547669 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-864166 && echo "addons-864166" | sudo tee /etc/hostname
	I1014 19:59:12.525496  547669 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-864166
	
	I1014 19:59:12.525583  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:12.542457  547669 main.go:141] libmachine: Using SSH client type: native
	I1014 19:59:12.542770  547669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33507 <nil> <nil>}
	I1014 19:59:12.542786  547669 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-864166' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-864166/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-864166' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 19:59:12.688584  547669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 19:59:12.688611  547669 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 19:59:12.688634  547669 ubuntu.go:190] setting up certificates
	I1014 19:59:12.688662  547669 provision.go:84] configureAuth start
	I1014 19:59:12.688739  547669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-864166
	I1014 19:59:12.708787  547669 provision.go:143] copyHostCerts
	I1014 19:59:12.708875  547669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 19:59:12.709011  547669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 19:59:12.709078  547669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 19:59:12.709142  547669 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.addons-864166 san=[127.0.0.1 192.168.49.2 addons-864166 localhost minikube]
	I1014 19:59:13.053424  547669 provision.go:177] copyRemoteCerts
	I1014 19:59:13.053510  547669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 19:59:13.053556  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:13.072784  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:13.176030  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 19:59:13.193304  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 19:59:13.210925  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 19:59:13.228157  547669 provision.go:87] duration metric: took 539.461746ms to configureAuth
	I1014 19:59:13.228182  547669 ubuntu.go:206] setting minikube options for container-runtime
	I1014 19:59:13.228479  547669 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:59:13.228596  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:13.245599  547669 main.go:141] libmachine: Using SSH client type: native
	I1014 19:59:13.245911  547669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33507 <nil> <nil>}
	I1014 19:59:13.245930  547669 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 19:59:13.500936  547669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 19:59:13.500959  547669 machine.go:96] duration metric: took 4.309519213s to provisionDockerMachine
	I1014 19:59:13.500974  547669 client.go:171] duration metric: took 13.577251094s to LocalClient.Create
	I1014 19:59:13.500987  547669 start.go:167] duration metric: took 13.577321584s to libmachine.API.Create "addons-864166"
	I1014 19:59:13.500994  547669 start.go:293] postStartSetup for "addons-864166" (driver="docker")
	I1014 19:59:13.501004  547669 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 19:59:13.501066  547669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 19:59:13.501108  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:13.524432  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:13.628667  547669 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 19:59:13.632079  547669 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 19:59:13.632156  547669 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 19:59:13.632175  547669 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 19:59:13.632258  547669 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 19:59:13.632285  547669 start.go:296] duration metric: took 131.285349ms for postStartSetup
	I1014 19:59:13.632612  547669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-864166
	I1014 19:59:13.653249  547669 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/config.json ...
	I1014 19:59:13.653655  547669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 19:59:13.653717  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:13.670832  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:13.769283  547669 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 19:59:13.773992  547669 start.go:128] duration metric: took 13.853987234s to createHost
	I1014 19:59:13.774071  547669 start.go:83] releasing machines lock for "addons-864166", held for 13.854187656s
	I1014 19:59:13.774168  547669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-864166
	I1014 19:59:13.790702  547669 ssh_runner.go:195] Run: cat /version.json
	I1014 19:59:13.790754  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:13.791067  547669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 19:59:13.791132  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:13.812461  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:13.834328  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:13.915870  547669 ssh_runner.go:195] Run: systemctl --version
	I1014 19:59:14.007076  547669 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 19:59:14.045127  547669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 19:59:14.049555  547669 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 19:59:14.049635  547669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 19:59:14.085636  547669 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1014 19:59:14.085671  547669 start.go:495] detecting cgroup driver to use...
	I1014 19:59:14.085728  547669 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 19:59:14.085808  547669 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 19:59:14.103676  547669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 19:59:14.116279  547669 docker.go:218] disabling cri-docker service (if available) ...
	I1014 19:59:14.116421  547669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 19:59:14.134445  547669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 19:59:14.153402  547669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 19:59:14.269201  547669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 19:59:14.395235  547669 docker.go:234] disabling docker service ...
	I1014 19:59:14.395340  547669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 19:59:14.415427  547669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 19:59:14.428068  547669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 19:59:14.545170  547669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 19:59:14.669849  547669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 19:59:14.682688  547669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 19:59:14.696343  547669 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 19:59:14.696411  547669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:59:14.704854  547669 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 19:59:14.704920  547669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:59:14.713458  547669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:59:14.721575  547669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:59:14.730394  547669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 19:59:14.738141  547669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:59:14.746950  547669 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:59:14.760124  547669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:59:14.768744  547669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 19:59:14.776449  547669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 19:59:14.783629  547669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:59:14.898903  547669 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 19:59:15.040058  547669 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 19:59:15.040183  547669 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 19:59:15.045118  547669 start.go:563] Will wait 60s for crictl version
	I1014 19:59:15.045224  547669 ssh_runner.go:195] Run: which crictl
	I1014 19:59:15.049399  547669 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 19:59:15.075648  547669 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 19:59:15.075806  547669 ssh_runner.go:195] Run: crio --version
	I1014 19:59:15.105597  547669 ssh_runner.go:195] Run: crio --version
	I1014 19:59:15.143960  547669 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 19:59:15.146850  547669 cli_runner.go:164] Run: docker network inspect addons-864166 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 19:59:15.163084  547669 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 19:59:15.166914  547669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 19:59:15.176550  547669 kubeadm.go:883] updating cluster {Name:addons-864166 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-864166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 19:59:15.176679  547669 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:59:15.176743  547669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:59:15.213395  547669 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:59:15.213420  547669 crio.go:433] Images already preloaded, skipping extraction
	I1014 19:59:15.213478  547669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:59:15.238770  547669 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:59:15.238794  547669 cache_images.go:85] Images are preloaded, skipping loading
	I1014 19:59:15.238802  547669 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 19:59:15.238886  547669 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-864166 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-864166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 19:59:15.238971  547669 ssh_runner.go:195] Run: crio config
	I1014 19:59:15.300074  547669 cni.go:84] Creating CNI manager for ""
	I1014 19:59:15.300138  547669 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:59:15.300171  547669 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 19:59:15.300217  547669 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-864166 NodeName:addons-864166 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 19:59:15.300385  547669 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-864166"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 19:59:15.300472  547669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 19:59:15.308570  547669 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 19:59:15.308646  547669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 19:59:15.315898  547669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1014 19:59:15.328252  547669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 19:59:15.341630  547669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1014 19:59:15.354557  547669 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 19:59:15.358490  547669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 19:59:15.368048  547669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:59:15.484177  547669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:59:15.499729  547669 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166 for IP: 192.168.49.2
	I1014 19:59:15.499802  547669 certs.go:195] generating shared ca certs ...
	I1014 19:59:15.499832  547669 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:15.500024  547669 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 19:59:15.872884  547669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt ...
	I1014 19:59:15.872917  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt: {Name:mk9472c60fe4424db2314e7a3ea9aad0a426d75b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:15.873142  547669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key ...
	I1014 19:59:15.873158  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key: {Name:mkb78757b13d34b18d9982ea6de4fd9ce3851ba3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:15.873247  547669 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 19:59:16.723837  547669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt ...
	I1014 19:59:16.723872  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt: {Name:mkdc527181622d5081b41fd495a359446f4796c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:16.724071  547669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key ...
	I1014 19:59:16.724087  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key: {Name:mkf23a0a6ec0a92f1f62c58c46012f7c62f6dad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:16.724167  547669 certs.go:257] generating profile certs ...
	I1014 19:59:16.724228  547669 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.key
	I1014 19:59:16.724247  547669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt with IP's: []
	I1014 19:59:16.989230  547669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt ...
	I1014 19:59:16.989263  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: {Name:mk6c988c6bbb32f8bb4dffa2c3a7504ae056cad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:16.989447  547669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.key ...
	I1014 19:59:16.989459  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.key: {Name:mk5141325328a4872e465d6be680ceb5e65afb89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:16.989567  547669 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.key.44cc0e5d
	I1014 19:59:16.989590  547669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.crt.44cc0e5d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1014 19:59:18.483070  547669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.crt.44cc0e5d ...
	I1014 19:59:18.483103  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.crt.44cc0e5d: {Name:mk522b09f50d7966504b93cd25c94d3e85aef632 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:18.483293  547669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.key.44cc0e5d ...
	I1014 19:59:18.483309  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.key.44cc0e5d: {Name:mkc9c92cee1a5d5d37a42bba3c89d3e99be6c892 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:18.483381  547669 certs.go:382] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.crt.44cc0e5d -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.crt
	I1014 19:59:18.483464  547669 certs.go:386] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.key.44cc0e5d -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.key
	I1014 19:59:18.483519  547669 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/proxy-client.key
	I1014 19:59:18.483539  547669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/proxy-client.crt with IP's: []
	I1014 19:59:18.725124  547669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/proxy-client.crt ...
	I1014 19:59:18.725158  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/proxy-client.crt: {Name:mk3d6a277197517bf303b2714e4d53b14bbdac31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:18.725364  547669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/proxy-client.key ...
	I1014 19:59:18.725379  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/proxy-client.key: {Name:mkdadd69d020076dbd43dd35a7e051a8522b21d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:18.725566  547669 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 19:59:18.725608  547669 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 19:59:18.725632  547669 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 19:59:18.725660  547669 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 19:59:18.726322  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 19:59:18.745209  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 19:59:18.763610  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 19:59:18.781371  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 19:59:18.797881  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1014 19:59:18.815543  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 19:59:18.832269  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 19:59:18.850689  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 19:59:18.867223  547669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 19:59:18.884365  547669 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 19:59:18.896795  547669 ssh_runner.go:195] Run: openssl version
	I1014 19:59:18.902772  547669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 19:59:18.911286  547669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:59:18.914893  547669 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:59:18.914973  547669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:59:18.955409  547669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 19:59:18.963541  547669 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 19:59:18.966911  547669 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 19:59:18.966958  547669 kubeadm.go:400] StartCluster: {Name:addons-864166 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-864166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:59:18.967046  547669 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 19:59:18.967113  547669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 19:59:18.993530  547669 cri.go:89] found id: ""
	I1014 19:59:18.993653  547669 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 19:59:19.002401  547669 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 19:59:19.010550  547669 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 19:59:19.010638  547669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 19:59:19.018345  547669 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 19:59:19.018364  547669 kubeadm.go:157] found existing configuration files:
	
	I1014 19:59:19.018438  547669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 19:59:19.026194  547669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 19:59:19.026296  547669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 19:59:19.033625  547669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 19:59:19.042128  547669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 19:59:19.042193  547669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 19:59:19.049355  547669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 19:59:19.056798  547669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 19:59:19.056879  547669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 19:59:19.064261  547669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 19:59:19.071893  547669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 19:59:19.071974  547669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 19:59:19.079243  547669 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 19:59:19.119286  547669 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 19:59:19.119352  547669 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 19:59:19.145863  547669 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 19:59:19.145945  547669 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1014 19:59:19.145987  547669 kubeadm.go:318] OS: Linux
	I1014 19:59:19.146040  547669 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 19:59:19.146095  547669 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1014 19:59:19.146149  547669 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 19:59:19.146203  547669 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 19:59:19.146258  547669 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 19:59:19.146346  547669 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 19:59:19.146399  547669 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 19:59:19.146455  547669 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 19:59:19.146508  547669 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1014 19:59:19.219512  547669 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 19:59:19.219632  547669 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 19:59:19.219731  547669 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 19:59:19.234905  547669 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 19:59:19.242087  547669 out.go:252]   - Generating certificates and keys ...
	I1014 19:59:19.242209  547669 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 19:59:19.242303  547669 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 19:59:19.474848  547669 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 19:59:20.769631  547669 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 19:59:21.726690  547669 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 19:59:22.117279  547669 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 19:59:23.468958  547669 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 19:59:23.469115  547669 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-864166 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 19:59:23.847938  547669 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 19:59:23.848166  547669 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-864166 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 19:59:25.035348  547669 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 19:59:25.493901  547669 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 19:59:25.967346  547669 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 19:59:25.967465  547669 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 19:59:26.925869  547669 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 19:59:27.413576  547669 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 19:59:27.931304  547669 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 19:59:29.315370  547669 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 19:59:29.503138  547669 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 19:59:29.503862  547669 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 19:59:29.507550  547669 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 19:59:29.511077  547669 out.go:252]   - Booting up control plane ...
	I1014 19:59:29.511204  547669 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 19:59:29.511295  547669 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 19:59:29.512185  547669 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 19:59:29.527162  547669 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 19:59:29.527280  547669 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 19:59:29.534864  547669 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 19:59:29.535190  547669 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 19:59:29.535459  547669 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 19:59:29.677045  547669 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 19:59:29.677165  547669 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 19:59:30.674518  547669 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002141436s
	I1014 19:59:30.678318  547669 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 19:59:30.678419  547669 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 19:59:30.678514  547669 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 19:59:30.678596  547669 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 19:59:34.557897  547669 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.878613614s
	I1014 19:59:35.497595  547669 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.819245323s
	I1014 19:59:37.180238  547669 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501848s
	I1014 19:59:37.204739  547669 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 19:59:37.218127  547669 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 19:59:37.233348  547669 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 19:59:37.233694  547669 kubeadm.go:318] [mark-control-plane] Marking the node addons-864166 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 19:59:37.246389  547669 kubeadm.go:318] [bootstrap-token] Using token: d5p9p5.n1skb9g2ibxk9cjj
	I1014 19:59:37.249498  547669 out.go:252]   - Configuring RBAC rules ...
	I1014 19:59:37.249647  547669 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 19:59:37.258531  547669 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 19:59:37.269553  547669 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 19:59:37.274012  547669 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 19:59:37.278409  547669 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 19:59:37.282792  547669 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 19:59:37.588009  547669 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 19:59:38.046415  547669 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1014 19:59:38.587689  547669 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1014 19:59:38.588852  547669 kubeadm.go:318] 
	I1014 19:59:38.588933  547669 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1014 19:59:38.588940  547669 kubeadm.go:318] 
	I1014 19:59:38.589016  547669 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1014 19:59:38.589021  547669 kubeadm.go:318] 
	I1014 19:59:38.589046  547669 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1014 19:59:38.589104  547669 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 19:59:38.589154  547669 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 19:59:38.589159  547669 kubeadm.go:318] 
	I1014 19:59:38.589212  547669 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1014 19:59:38.589217  547669 kubeadm.go:318] 
	I1014 19:59:38.589264  547669 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 19:59:38.589269  547669 kubeadm.go:318] 
	I1014 19:59:38.589321  547669 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1014 19:59:38.589396  547669 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 19:59:38.589463  547669 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 19:59:38.589468  547669 kubeadm.go:318] 
	I1014 19:59:38.589726  547669 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 19:59:38.589820  547669 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1014 19:59:38.589831  547669 kubeadm.go:318] 
	I1014 19:59:38.589920  547669 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token d5p9p5.n1skb9g2ibxk9cjj \
	I1014 19:59:38.590034  547669 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a \
	I1014 19:59:38.590061  547669 kubeadm.go:318] 	--control-plane 
	I1014 19:59:38.590070  547669 kubeadm.go:318] 
	I1014 19:59:38.590167  547669 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1014 19:59:38.590177  547669 kubeadm.go:318] 
	I1014 19:59:38.590269  547669 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token d5p9p5.n1skb9g2ibxk9cjj \
	I1014 19:59:38.590381  547669 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a 
	I1014 19:59:38.593580  547669 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1014 19:59:38.593891  547669 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1014 19:59:38.594027  547669 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 19:59:38.594045  547669 cni.go:84] Creating CNI manager for ""
	I1014 19:59:38.594054  547669 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:59:38.597271  547669 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1014 19:59:38.600267  547669 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 19:59:38.604296  547669 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1014 19:59:38.604339  547669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 19:59:38.616972  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 19:59:38.906705  547669 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 19:59:38.906844  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:38.906919  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-864166 minikube.k8s.io/updated_at=2025_10_14T19_59_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662 minikube.k8s.io/name=addons-864166 minikube.k8s.io/primary=true
	I1014 19:59:39.049350  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:39.049415  547669 ops.go:34] apiserver oom_adj: -16
	I1014 19:59:39.550017  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:40.050031  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:40.550325  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:41.049513  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:41.549510  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:42.049451  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:42.550080  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:43.050280  547669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:59:43.198826  547669 kubeadm.go:1113] duration metric: took 4.292022008s to wait for elevateKubeSystemPrivileges
	I1014 19:59:43.198852  547669 kubeadm.go:402] duration metric: took 24.23189744s to StartCluster
	I1014 19:59:43.198869  547669 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:43.198979  547669 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 19:59:43.199380  547669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:59:43.199599  547669 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 19:59:43.199773  547669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 19:59:43.200018  547669 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:59:43.200050  547669 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1014 19:59:43.200119  547669 addons.go:69] Setting yakd=true in profile "addons-864166"
	I1014 19:59:43.200136  547669 addons.go:238] Setting addon yakd=true in "addons-864166"
	I1014 19:59:43.200158  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.200667  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.200854  547669 addons.go:69] Setting inspektor-gadget=true in profile "addons-864166"
	I1014 19:59:43.200868  547669 addons.go:238] Setting addon inspektor-gadget=true in "addons-864166"
	I1014 19:59:43.200888  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.201273  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.201634  547669 addons.go:69] Setting metrics-server=true in profile "addons-864166"
	I1014 19:59:43.201663  547669 addons.go:238] Setting addon metrics-server=true in "addons-864166"
	I1014 19:59:43.201704  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.202112  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.202394  547669 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-864166"
	I1014 19:59:43.202410  547669 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-864166"
	I1014 19:59:43.202433  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.202834  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.208200  547669 addons.go:69] Setting cloud-spanner=true in profile "addons-864166"
	I1014 19:59:43.208234  547669 addons.go:238] Setting addon cloud-spanner=true in "addons-864166"
	I1014 19:59:43.208377  547669 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-864166"
	I1014 19:59:43.208399  547669 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-864166"
	I1014 19:59:43.208430  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.208521  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.208897  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.210849  547669 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-864166"
	I1014 19:59:43.210911  547669 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-864166"
	I1014 19:59:43.210937  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.211344  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.218540  547669 addons.go:69] Setting registry=true in profile "addons-864166"
	I1014 19:59:43.218577  547669 addons.go:238] Setting addon registry=true in "addons-864166"
	I1014 19:59:43.218624  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.219175  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.219536  547669 addons.go:69] Setting default-storageclass=true in profile "addons-864166"
	I1014 19:59:43.219556  547669 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-864166"
	I1014 19:59:43.219832  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.234893  547669 addons.go:69] Setting registry-creds=true in profile "addons-864166"
	I1014 19:59:43.234927  547669 addons.go:238] Setting addon registry-creds=true in "addons-864166"
	I1014 19:59:43.234961  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.235424  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.238040  547669 addons.go:69] Setting gcp-auth=true in profile "addons-864166"
	I1014 19:59:43.238078  547669 mustload.go:65] Loading cluster: addons-864166
	I1014 19:59:43.238301  547669 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:59:43.238581  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.254803  547669 addons.go:69] Setting ingress=true in profile "addons-864166"
	I1014 19:59:43.254835  547669 addons.go:238] Setting addon ingress=true in "addons-864166"
	I1014 19:59:43.254882  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.255338  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.255622  547669 addons.go:69] Setting storage-provisioner=true in profile "addons-864166"
	I1014 19:59:43.255639  547669 addons.go:238] Setting addon storage-provisioner=true in "addons-864166"
	I1014 19:59:43.255670  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.256071  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.269239  547669 addons.go:69] Setting ingress-dns=true in profile "addons-864166"
	I1014 19:59:43.269273  547669 addons.go:238] Setting addon ingress-dns=true in "addons-864166"
	I1014 19:59:43.269329  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.269793  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.272419  547669 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-864166"
	I1014 19:59:43.272452  547669 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-864166"
	I1014 19:59:43.272775  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.295736  547669 out.go:179] * Verifying Kubernetes components...
	I1014 19:59:43.298439  547669 addons.go:69] Setting volcano=true in profile "addons-864166"
	I1014 19:59:43.298472  547669 addons.go:238] Setting addon volcano=true in "addons-864166"
	I1014 19:59:43.298517  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.298992  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.299173  547669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:59:43.339011  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.345713  547669 addons.go:69] Setting volumesnapshots=true in profile "addons-864166"
	I1014 19:59:43.345743  547669 addons.go:238] Setting addon volumesnapshots=true in "addons-864166"
	I1014 19:59:43.345788  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.356470  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.469560  547669 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1014 19:59:43.502770  547669 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1014 19:59:43.507176  547669 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1014 19:59:43.507207  547669 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1014 19:59:43.507289  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.507461  547669 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1014 19:59:43.510389  547669 out.go:179]   - Using image docker.io/registry:3.0.0
	I1014 19:59:43.512739  547669 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 19:59:43.512800  547669 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 19:59:43.512918  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.540421  547669 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1014 19:59:43.540566  547669 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1014 19:59:43.540616  547669 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1014 19:59:43.543353  547669 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1014 19:59:43.543418  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1014 19:59:43.543502  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.545132  547669 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1014 19:59:43.545152  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1014 19:59:43.545221  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.555465  547669 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1014 19:59:43.555484  547669 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1014 19:59:43.555544  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.557518  547669 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-864166"
	I1014 19:59:43.557568  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.558026  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.583498  547669 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1014 19:59:43.588488  547669 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1014 19:59:43.588513  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1014 19:59:43.588580  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.609376  547669 addons.go:238] Setting addon default-storageclass=true in "addons-864166"
	I1014 19:59:43.609476  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.610125  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:43.610581  547669 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1014 19:59:43.610855  547669 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1014 19:59:43.610986  547669 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 19:59:43.611793  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:43.653661  547669 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1014 19:59:43.654689  547669 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	W1014 19:59:43.611976  547669 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1014 19:59:43.623340  547669 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1014 19:59:43.655931  547669 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1014 19:59:43.623414  547669 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1014 19:59:43.623536  547669 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:59:43.656731  547669 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1014 19:59:43.668381  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.670423  547669 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1014 19:59:43.673683  547669 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1014 19:59:43.681152  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1014 19:59:43.681422  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.681863  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1014 19:59:43.681958  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.689735  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.692460  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 19:59:43.700111  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.692470  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1014 19:59:43.701094  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.692771  547669 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1014 19:59:43.709785  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.716708  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.693670  547669 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1014 19:59:43.699693  547669 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1014 19:59:43.728780  547669 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1014 19:59:43.731677  547669 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1014 19:59:43.731697  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1014 19:59:43.731758  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.735975  547669 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1014 19:59:43.737914  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.742041  547669 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1014 19:59:43.745032  547669 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1014 19:59:43.747820  547669 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1014 19:59:43.756412  547669 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1014 19:59:43.759297  547669 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1014 19:59:43.759324  547669 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1014 19:59:43.759396  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.771006  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.779320  547669 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1014 19:59:43.786927  547669 out.go:179]   - Using image docker.io/busybox:stable
	I1014 19:59:43.795128  547669 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1014 19:59:43.795153  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1014 19:59:43.795218  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.845802  547669 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 19:59:43.845823  547669 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 19:59:43.845883  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:43.848641  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.854743  547669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 19:59:43.855497  547669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:59:43.899591  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.900122  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.902717  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.911427  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.922101  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.936788  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.942985  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:43.948536  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	W1014 19:59:43.949401  547669 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1014 19:59:43.949435  547669 retry.go:31] will retry after 334.62549ms: ssh: handshake failed: EOF
	I1014 19:59:43.978747  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	W1014 19:59:43.980658  547669 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1014 19:59:43.980680  547669 retry.go:31] will retry after 303.149886ms: ssh: handshake failed: EOF
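The two `retry.go:31` lines above show minikube's retry helper backing off after transient SSH handshake failures instead of aborting the addon installation. A minimal, hypothetical Go sketch of that retry-with-backoff pattern is below; the function name, attempt count, and durations are illustrative and not taken from minikube's actual retry.go.

```go
// Hypothetical sketch of the retry-with-backoff pattern visible in the
// retry.go lines above; names and durations are illustrative.
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryAfter retries fn up to attempts times, sleeping wait between tries
// and doubling the wait each time, mirroring the "will retry after Xms" lines.
func retryAfter(attempts int, wait time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		wait *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retryAfter(4, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: EOF")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```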
	I1014 19:59:44.298367  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1014 19:59:44.359247  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1014 19:59:44.393841  547669 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 19:59:44.393906  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1014 19:59:44.449204  547669 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:59:44.449268  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1014 19:59:44.467288  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1014 19:59:44.480018  547669 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1014 19:59:44.480089  547669 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1014 19:59:44.520293  547669 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1014 19:59:44.520379  547669 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1014 19:59:44.524262  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1014 19:59:44.529522  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1014 19:59:44.551718  547669 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1014 19:59:44.551793  547669 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1014 19:59:44.589328  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1014 19:59:44.601717  547669 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 19:59:44.601779  547669 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 19:59:44.625299  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:59:44.630524  547669 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1014 19:59:44.630560  547669 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1014 19:59:44.637997  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1014 19:59:44.642869  547669 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1014 19:59:44.642890  547669 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1014 19:59:44.645171  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:59:44.686949  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:59:44.714979  547669 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 19:59:44.715045  547669 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 19:59:44.717575  547669 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1014 19:59:44.717633  547669 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1014 19:59:44.761335  547669 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1014 19:59:44.761405  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1014 19:59:44.770695  547669 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1014 19:59:44.770759  547669 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1014 19:59:44.795637  547669 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1014 19:59:44.795709  547669 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1014 19:59:44.896374  547669 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1014 19:59:44.896445  547669 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1014 19:59:44.930521  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1014 19:59:45.003450  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 19:59:45.042426  547669 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1014 19:59:45.042514  547669 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1014 19:59:45.211484  547669 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1014 19:59:45.211577  547669 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1014 19:59:45.236994  547669 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1014 19:59:45.237067  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1014 19:59:45.247082  547669 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 19:59:45.247169  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1014 19:59:45.367959  547669 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.513180877s)
	I1014 19:59:45.368131  547669 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1014 19:59:45.368054  547669 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.512536322s)
	I1014 19:59:45.368962  547669 node_ready.go:35] waiting up to 6m0s for node "addons-864166" to be "Ready" ...
	I1014 19:59:45.368092  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.069664762s)
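The long sed pipeline that completed above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the gateway address 192.168.49.1 inside the cluster. The sketch below (not minikube code) shows, under that assumption, the effect of the insertion: a `hosts` block is placed immediately before the Corefile's `forward . /etc/resolv.conf` line; the sample Corefile content is illustrative.

```go
// Illustrative sketch of what the sed pipeline above does: insert a "hosts"
// block before the "forward . /etc/resolv.conf" line of the CoreDNS Corefile
// so host.minikube.internal resolves to the host gateway IP. The "log"
// insertion done by the same pipeline is omitted here.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock) // inserted before the forward block
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}
```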
	I1014 19:59:45.502955  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1014 19:59:45.545961  547669 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1014 19:59:45.546033  547669 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1014 19:59:45.549645  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 19:59:45.685871  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.326553645s)
	I1014 19:59:45.789942  547669 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1014 19:59:45.790009  547669 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1014 19:59:45.874855  547669 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-864166" context rescaled to 1 replicas
	I1014 19:59:46.158532  547669 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1014 19:59:46.158610  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1014 19:59:46.343239  547669 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1014 19:59:46.343312  547669 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1014 19:59:46.502294  547669 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1014 19:59:46.502367  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1014 19:59:46.822199  547669 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1014 19:59:46.822280  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1014 19:59:47.014149  547669 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1014 19:59:47.014226  547669 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1014 19:59:47.297628  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
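The `ssh_runner ... kubectl apply -f ... -f ...` lines above all follow one pattern: a single kubectl invocation, pointed at the node-local kubeconfig, applying a batch of addon manifests at once. A minimal local sketch of that pattern is below, assuming kubectl is on PATH; the kubeconfig and manifest paths are copied from the log purely for illustration.

```go
// Minimal sketch (assumed, not minikube's ssh_runner) of batching several
// addon manifests into one "kubectl apply" call with an explicit KUBECONFIG.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifests(kubeconfig string, manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	err := applyManifests("/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/csi-hostpath-attacher.yaml",
		"/etc/kubernetes/addons/csi-hostpath-plugin.yaml",
	)
	fmt.Println("apply finished:", err)
}
```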
	W1014 19:59:47.394551  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 19:59:49.319285  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.851901904s)
	I1014 19:59:49.319371  547669 addons.go:479] Verifying addon ingress=true in "addons-864166"
	I1014 19:59:49.319605  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.795156553s)
	I1014 19:59:49.319667  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.790079723s)
	I1014 19:59:49.319810  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.69444502s)
	I1014 19:59:49.319859  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.681838155s)
	I1014 19:59:49.319921  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.674722756s)
	W1014 19:59:49.319937  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:49.319950  547669 retry.go:31] will retry after 224.113929ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
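kubectl rejects /etc/kubernetes/addons/ig-crd.yaml here because the manifest carries no apiVersion or kind; the earlier scp line shows ig-crd.yaml arriving as only 14 bytes, which would be consistent with an effectively empty file, though the log does not show its contents. A hypothetical pre-flight check for that failure mode is sketched below; it is not part of minikube, and the path is simply the one from the log.

```go
// Hypothetical pre-flight check for the failure mode above: reject a manifest
// that is missing the apiVersion/kind header before handing it to kubectl.
package main

import (
	"fmt"
	"os"
	"strings"
)

// hasTypeMeta does a cheap textual check for the required top-level fields.
func hasTypeMeta(manifest []byte) bool {
	text := string(manifest)
	return strings.Contains(text, "apiVersion:") && strings.Contains(text, "kind:")
}

func main() {
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml") // path from the log
	if err != nil {
		fmt.Println("read error:", err)
		return
	}
	if !hasTypeMeta(data) {
		fmt.Printf("manifest is %d bytes and has no apiVersion/kind; kubectl validation will fail\n", len(data))
		return
	}
	fmt.Println("manifest looks applyable")
}
```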
	I1014 19:59:49.319977  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.6329595s)
	I1014 19:59:49.320177  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.389583464s)
	I1014 19:59:49.320190  547669 addons.go:479] Verifying addon registry=true in "addons-864166"
	I1014 19:59:49.320296  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.730352152s)
	I1014 19:59:49.320675  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.317134377s)
	I1014 19:59:49.320698  547669 addons.go:479] Verifying addon metrics-server=true in "addons-864166"
	I1014 19:59:49.320735  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.817704949s)
	I1014 19:59:49.322849  547669 out.go:179] * Verifying ingress addon...
	I1014 19:59:49.324849  547669 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-864166 service yakd-dashboard -n yakd-dashboard
	
	I1014 19:59:49.324916  547669 out.go:179] * Verifying registry addon...
	I1014 19:59:49.327522  547669 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1014 19:59:49.330619  547669 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1014 19:59:49.335682  547669 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1014 19:59:49.335709  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:49.337788  547669 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1014 19:59:49.337813  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
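The `kapi.go:75/86/96` lines above poll pods selected by a label until they leave Pending. An illustrative poll loop with the same shape is below; it shells out to kubectl rather than reproducing minikube's kapi.go, and the namespace, selector, and timeout are taken from the log only as examples.

```go
// Illustrative poll loop (not minikube's kapi.go): query pods matching a
// label selector until one reports the Running phase or a timeout expires.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPodRunning(namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "-n", namespace, "get", "pods",
			"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		fmt.Printf("waiting for pod %q, current state: %s\n", selector, strings.TrimSpace(string(out)))
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %q", selector)
}

func main() {
	err := waitForPodRunning("ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 2*time.Minute)
	fmt.Println(err)
}
```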
	W1014 19:59:49.342388  547669 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1014 19:59:49.359924  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.81019441s)
	W1014 19:59:49.359964  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1014 19:59:49.359983  547669 retry.go:31] will retry after 137.53807ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
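The failure above is an ordering race: the VolumeSnapshotClass object is applied in the same batch as the snapshot CRDs, and the API server has not registered the new kinds yet ("ensure CRDs are installed first"); the very next log line retries the batch with `apply --force`. A minimal sketch of one way to avoid the race is below, assuming the standard `kubectl wait --for=condition=Established` behaviour for CRDs; it is not what minikube does here, just an illustration of waiting for CRD registration before applying dependent objects.

```go
// A minimal sketch (assumed, not minikube's approach) of avoiding the
// "ensure CRDs are installed first" race: wait for the snapshot CRDs to reach
// the Established condition before applying objects that use them.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	crds := []string{
		"volumesnapshotclasses.snapshot.storage.k8s.io",
		"volumesnapshotcontents.snapshot.storage.k8s.io",
		"volumesnapshots.snapshot.storage.k8s.io",
	}
	for _, crd := range crds {
		// Block until the API server reports the CRD as Established.
		if err := run("kubectl", "wait", "--for=condition=Established",
			"crd/"+crd, "--timeout=60s"); err != nil {
			fmt.Println("CRD not ready:", crd, err)
			return
		}
	}
	// Only now apply the VolumeSnapshotClass that depends on those CRDs.
	fmt.Println(run("kubectl", "apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"))
}
```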
	I1014 19:59:49.498355  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 19:59:49.544348  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:59:49.840285  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:49.843042  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 19:59:49.876775  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 19:59:49.905458  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.607741773s)
	I1014 19:59:49.905502  547669 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-864166"
	I1014 19:59:49.910588  547669 out.go:179] * Verifying csi-hostpath-driver addon...
	I1014 19:59:49.913777  547669 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1014 19:59:49.923092  547669 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1014 19:59:49.923114  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:50.330781  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:50.333012  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:50.416841  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:50.794612  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.250219191s)
	W1014 19:59:50.794646  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:50.794666  547669 retry.go:31] will retry after 558.61315ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:50.831129  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:50.833114  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:50.931921  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:51.262156  547669 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1014 19:59:51.262238  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:51.278982  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:51.331984  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:51.334364  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:51.354121  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:59:51.396515  547669 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1014 19:59:51.418286  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:51.421175  547669 addons.go:238] Setting addon gcp-auth=true in "addons-864166"
	I1014 19:59:51.421263  547669 host.go:66] Checking if "addons-864166" exists ...
	I1014 19:59:51.421729  547669 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 19:59:51.446239  547669 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1014 19:59:51.446305  547669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 19:59:51.469365  547669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 19:59:51.832601  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:51.833655  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:51.917400  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1014 19:59:52.188260  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:52.188294  547669 retry.go:31] will retry after 315.178256ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:52.191448  547669 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1014 19:59:52.194227  547669 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1014 19:59:52.197011  547669 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1014 19:59:52.197032  547669 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1014 19:59:52.210964  547669 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1014 19:59:52.210985  547669 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1014 19:59:52.225039  547669 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1014 19:59:52.225061  547669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1014 19:59:52.239242  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1014 19:59:52.331673  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:52.333531  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 19:59:52.372410  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 19:59:52.417116  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:52.504241  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:59:52.779941  547669 addons.go:479] Verifying addon gcp-auth=true in "addons-864166"
	I1014 19:59:52.782963  547669 out.go:179] * Verifying gcp-auth addon...
	I1014 19:59:52.787341  547669 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1014 19:59:52.796877  547669 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1014 19:59:52.796899  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:52.897484  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:52.897702  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:52.917329  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:53.290809  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:53.331645  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:53.334292  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 19:59:53.402964  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:53.402996  547669 retry.go:31] will retry after 458.066972ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:53.417154  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:53.795812  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:53.830832  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:53.833333  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:53.861620  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:59:53.918119  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:54.291198  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:54.331728  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:54.333881  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 19:59:54.373593  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 19:59:54.417512  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1014 19:59:54.678096  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:54.678171  547669 retry.go:31] will retry after 1.852442892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:54.791016  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:54.831433  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:54.834058  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:54.916803  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:55.290719  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:55.330801  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:55.333098  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:55.416537  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:55.790807  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:55.830573  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:55.834002  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:55.917307  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:56.290657  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:56.331380  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:56.333230  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:56.417338  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:56.531506  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:59:56.791246  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:56.831822  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:56.834451  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 19:59:56.872944  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 19:59:56.917560  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:57.291840  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1014 19:59:57.328945  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:57.329018  547669 retry.go:31] will retry after 2.493328732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:59:57.331094  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:57.333532  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:57.417297  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:57.790808  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:57.830536  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:57.833813  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:57.918330  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:58.290609  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:58.330391  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:58.333824  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:58.416839  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:58.791358  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:58.831261  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:58.834925  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:58.916868  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:59.290737  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:59.331107  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:59.333067  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 19:59:59.371770  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 19:59:59.417984  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:59:59.791112  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:59:59.823570  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:59:59.835325  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:59:59.836122  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:59:59.917735  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:00.295035  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:00.352113  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:00.352280  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:00.425460  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:00.815502  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:00.835063  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:00.838501  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:00.918544  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:01.295281  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:01.345674  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:01.346249  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.522598993s)
	W1014 20:00:01.346285  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:01.346320  547669 retry.go:31] will retry after 2.907557369s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:01.359434  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:01.402302  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:01.422200  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:01.836521  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:01.862388  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:01.862573  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:01.936994  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:02.293610  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:02.333011  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:02.344870  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:02.417357  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:02.792094  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:02.831154  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:02.834074  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:02.916972  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:03.290561  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:03.332189  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:03.334557  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:03.417954  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:03.790411  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:03.831032  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:03.833491  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:03.872295  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:03.917338  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:04.254659  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 20:00:04.291386  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:04.331947  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:04.334876  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:04.417095  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:04.791450  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:04.830717  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:04.833168  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:04.917801  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1014 20:00:05.089871  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:05.089919  547669 retry.go:31] will retry after 4.213729924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:05.290726  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:05.330665  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:05.333101  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:05.417745  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:05.791792  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:05.831866  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:05.834086  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:05.916846  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:06.291368  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:06.331723  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:06.333566  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:06.372463  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:06.417199  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:06.791008  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:06.831430  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:06.833670  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:06.916926  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:07.290520  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:07.331143  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:07.333384  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:07.423721  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:07.791023  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:07.831193  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:07.834126  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:07.916979  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:08.301396  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:08.331507  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:08.333449  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:08.417560  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:08.791214  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:08.831543  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:08.833971  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:08.871868  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:08.917886  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:09.291150  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:09.304476  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 20:00:09.333022  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:09.336174  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:09.417629  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:09.791224  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:09.831723  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:09.834116  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:09.918736  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1014 20:00:10.173609  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:10.173644  547669 retry.go:31] will retry after 3.694244117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:10.290651  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:10.330861  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:10.333444  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:10.417395  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:10.790697  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:10.833105  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:10.834374  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:10.872915  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:10.916738  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:11.291290  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:11.331559  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:11.334031  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:11.417793  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:11.790136  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:11.831461  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:11.833318  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:11.917149  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:12.290432  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:12.338837  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:12.339199  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:12.416794  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:12.790990  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:12.831288  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:12.833411  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:12.917348  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:13.290536  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:13.331566  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:13.333966  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:13.371704  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:13.417661  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:13.791210  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:13.832356  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:13.833458  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:13.868601  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 20:00:13.917434  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:14.291703  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:14.331920  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:14.334649  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:14.417514  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1014 20:00:14.698106  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:14.698139  547669 retry.go:31] will retry after 5.682476792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:14.791441  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:14.831604  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:14.834089  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:14.916849  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:15.290927  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:15.330727  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:15.333277  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:15.372279  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:15.417432  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:15.790331  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:15.831265  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:15.833495  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:15.917282  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:16.290544  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:16.330570  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:16.333876  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:16.417797  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:16.791498  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:16.830476  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:16.834068  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:16.916855  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:17.291561  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:17.331880  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:17.334625  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:17.372401  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:17.417167  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:17.790478  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:17.832011  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:17.834025  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:17.917776  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:18.290865  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:18.331366  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:18.333219  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:18.417055  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:18.791075  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:18.832075  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:18.833286  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:18.917392  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:19.290622  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:19.330611  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:19.334015  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:19.372541  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:19.417706  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:19.791117  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:19.831481  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:19.833477  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:19.917253  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:20.290260  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:20.331466  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:20.333744  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:20.381745  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 20:00:20.417365  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:20.790568  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:20.831055  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:20.833698  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:20.917837  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1014 20:00:21.220446  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:21.220536  547669 retry.go:31] will retry after 12.384930157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:21.290123  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:21.331470  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:21.333584  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:21.418634  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:21.790969  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:21.831235  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:21.833396  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:21.871992  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:21.916851  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:22.291180  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:22.332139  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:22.333322  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:22.416913  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:22.791149  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:22.831032  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:22.833165  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:22.916759  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:23.291140  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:23.331288  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:23.333185  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:23.417224  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:23.790778  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:23.830861  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:23.833014  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:23.916673  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:24.290781  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:24.330945  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:24.333301  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1014 20:00:24.372066  547669 node_ready.go:57] node "addons-864166" has "Ready":"False" status (will retry)
	I1014 20:00:24.417940  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:24.790906  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:24.830707  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:24.832985  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:24.917729  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:25.291188  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:25.331305  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:25.333299  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:25.416978  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:25.794600  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:25.832812  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:25.886296  547669 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1014 20:00:25.886321  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:25.898953  547669 node_ready.go:49] node "addons-864166" is "Ready"
	I1014 20:00:25.898986  547669 node_ready.go:38] duration metric: took 40.530003122s for node "addons-864166" to be "Ready" ...
	I1014 20:00:25.899000  547669 api_server.go:52] waiting for apiserver process to appear ...
	I1014 20:00:25.899075  547669 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:00:25.929830  547669 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1014 20:00:25.929857  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:25.931030  547669 api_server.go:72] duration metric: took 42.731405309s to wait for apiserver process to appear ...
	I1014 20:00:25.931053  547669 api_server.go:88] waiting for apiserver healthz status ...
	I1014 20:00:25.931071  547669 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1014 20:00:25.945495  547669 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1014 20:00:25.949036  547669 api_server.go:141] control plane version: v1.34.1
	I1014 20:00:25.949069  547669 api_server.go:131] duration metric: took 18.007827ms to wait for apiserver health ...
	I1014 20:00:25.949079  547669 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 20:00:25.981649  547669 system_pods.go:59] 19 kube-system pods found
	I1014 20:00:25.981685  547669 system_pods.go:61] "coredns-66bc5c9577-9rwrs" [c47964fa-47a1-424c-a7eb-5b512551c245] Pending
	I1014 20:00:25.981691  547669 system_pods.go:61] "csi-hostpath-attacher-0" [45ea89e5-0c4a-4a0e-bb7c-5cb62f77f0dd] Pending
	I1014 20:00:25.981701  547669 system_pods.go:61] "csi-hostpath-resizer-0" [6f065c64-49b4-4b19-8928-ef3f62e74cf8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1014 20:00:25.981706  547669 system_pods.go:61] "csi-hostpathplugin-svw4p" [f1b998c8-61ab-40b8-b449-1081efd7beff] Pending
	I1014 20:00:25.981712  547669 system_pods.go:61] "etcd-addons-864166" [8d1396ae-3e63-45fe-9ee8-94a86740b503] Running
	I1014 20:00:25.981717  547669 system_pods.go:61] "kindnet-68wfd" [5641ead2-4190-4e48-b41f-07309a894764] Running
	I1014 20:00:25.981752  547669 system_pods.go:61] "kube-apiserver-addons-864166" [ba844bf8-6968-48c5-bc90-642f6e308be5] Running
	I1014 20:00:25.981763  547669 system_pods.go:61] "kube-controller-manager-addons-864166" [f2f2b122-e41d-411b-9f5f-0934a2d2bcc6] Running
	I1014 20:00:25.981768  547669 system_pods.go:61] "kube-ingress-dns-minikube" [4c972b0f-59b8-4f1b-aff0-86477fdb31a8] Pending
	I1014 20:00:25.981772  547669 system_pods.go:61] "kube-proxy-qkhq5" [63c22dc1-5b7b-43b2-b879-fba9222761a7] Running
	I1014 20:00:25.981777  547669 system_pods.go:61] "kube-scheduler-addons-864166" [1e259b7d-9efa-46e4-a5ed-023f7e6ebfdf] Running
	I1014 20:00:25.981788  547669 system_pods.go:61] "metrics-server-85b7d694d7-q2bhj" [bf121c3b-58d6-4f5d-9ec7-3f6356b23132] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 20:00:25.981793  547669 system_pods.go:61] "nvidia-device-plugin-daemonset-cgtft" [174ba8d1-a219-4e80-9816-0f60ee316d62] Pending
	I1014 20:00:25.981818  547669 system_pods.go:61] "registry-6b586f9694-ph85v" [5bda4348-2158-45b7-81f9-366fd18b681a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 20:00:25.981832  547669 system_pods.go:61] "registry-creds-764b6fb674-6rp57" [16928613-b3d8-4dca-854f-3b08843dd017] Pending
	I1014 20:00:25.981840  547669 system_pods.go:61] "registry-proxy-ww88v" [b91a51f7-e2da-4901-8d80-579ad73448e2] Pending
	I1014 20:00:25.981850  547669 system_pods.go:61] "snapshot-controller-7d9fbc56b8-c992l" [13663a37-601e-474f-a20d-f0a7c1597a30] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:25.981857  547669 system_pods.go:61] "snapshot-controller-7d9fbc56b8-k524x" [d44ca9f8-5376-4e01-b3a9-2050733f2b88] Pending
	I1014 20:00:25.981867  547669 system_pods.go:61] "storage-provisioner" [ba401d7b-7ea2-478c-876b-27114aea306d] Pending
	I1014 20:00:25.981873  547669 system_pods.go:74] duration metric: took 32.788409ms to wait for pod list to return data ...
	I1014 20:00:25.981908  547669 default_sa.go:34] waiting for default service account to be created ...
	I1014 20:00:26.065678  547669 default_sa.go:45] found service account: "default"
	I1014 20:00:26.065710  547669 default_sa.go:55] duration metric: took 83.789152ms for default service account to be created ...
	I1014 20:00:26.065721  547669 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 20:00:26.143814  547669 system_pods.go:86] 19 kube-system pods found
	I1014 20:00:26.143852  547669 system_pods.go:89] "coredns-66bc5c9577-9rwrs" [c47964fa-47a1-424c-a7eb-5b512551c245] Pending
	I1014 20:00:26.143859  547669 system_pods.go:89] "csi-hostpath-attacher-0" [45ea89e5-0c4a-4a0e-bb7c-5cb62f77f0dd] Pending
	I1014 20:00:26.143867  547669 system_pods.go:89] "csi-hostpath-resizer-0" [6f065c64-49b4-4b19-8928-ef3f62e74cf8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1014 20:00:26.143872  547669 system_pods.go:89] "csi-hostpathplugin-svw4p" [f1b998c8-61ab-40b8-b449-1081efd7beff] Pending
	I1014 20:00:26.143917  547669 system_pods.go:89] "etcd-addons-864166" [8d1396ae-3e63-45fe-9ee8-94a86740b503] Running
	I1014 20:00:26.143923  547669 system_pods.go:89] "kindnet-68wfd" [5641ead2-4190-4e48-b41f-07309a894764] Running
	I1014 20:00:26.143928  547669 system_pods.go:89] "kube-apiserver-addons-864166" [ba844bf8-6968-48c5-bc90-642f6e308be5] Running
	I1014 20:00:26.143932  547669 system_pods.go:89] "kube-controller-manager-addons-864166" [f2f2b122-e41d-411b-9f5f-0934a2d2bcc6] Running
	I1014 20:00:26.143944  547669 system_pods.go:89] "kube-ingress-dns-minikube" [4c972b0f-59b8-4f1b-aff0-86477fdb31a8] Pending
	I1014 20:00:26.143948  547669 system_pods.go:89] "kube-proxy-qkhq5" [63c22dc1-5b7b-43b2-b879-fba9222761a7] Running
	I1014 20:00:26.143953  547669 system_pods.go:89] "kube-scheduler-addons-864166" [1e259b7d-9efa-46e4-a5ed-023f7e6ebfdf] Running
	I1014 20:00:26.143989  547669 system_pods.go:89] "metrics-server-85b7d694d7-q2bhj" [bf121c3b-58d6-4f5d-9ec7-3f6356b23132] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 20:00:26.144001  547669 system_pods.go:89] "nvidia-device-plugin-daemonset-cgtft" [174ba8d1-a219-4e80-9816-0f60ee316d62] Pending
	I1014 20:00:26.144010  547669 system_pods.go:89] "registry-6b586f9694-ph85v" [5bda4348-2158-45b7-81f9-366fd18b681a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 20:00:26.144019  547669 system_pods.go:89] "registry-creds-764b6fb674-6rp57" [16928613-b3d8-4dca-854f-3b08843dd017] Pending
	I1014 20:00:26.144025  547669 system_pods.go:89] "registry-proxy-ww88v" [b91a51f7-e2da-4901-8d80-579ad73448e2] Pending
	I1014 20:00:26.144032  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c992l" [13663a37-601e-474f-a20d-f0a7c1597a30] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:26.144040  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k524x" [d44ca9f8-5376-4e01-b3a9-2050733f2b88] Pending
	I1014 20:00:26.144044  547669 system_pods.go:89] "storage-provisioner" [ba401d7b-7ea2-478c-876b-27114aea306d] Pending
	I1014 20:00:26.144070  547669 retry.go:31] will retry after 294.445937ms: missing components: kube-dns
	I1014 20:00:26.295868  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:26.336592  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:26.340662  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:26.422178  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:26.451634  547669 system_pods.go:86] 19 kube-system pods found
	I1014 20:00:26.451724  547669 system_pods.go:89] "coredns-66bc5c9577-9rwrs" [c47964fa-47a1-424c-a7eb-5b512551c245] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:00:26.451749  547669 system_pods.go:89] "csi-hostpath-attacher-0" [45ea89e5-0c4a-4a0e-bb7c-5cb62f77f0dd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1014 20:00:26.451787  547669 system_pods.go:89] "csi-hostpath-resizer-0" [6f065c64-49b4-4b19-8928-ef3f62e74cf8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1014 20:00:26.451813  547669 system_pods.go:89] "csi-hostpathplugin-svw4p" [f1b998c8-61ab-40b8-b449-1081efd7beff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1014 20:00:26.451835  547669 system_pods.go:89] "etcd-addons-864166" [8d1396ae-3e63-45fe-9ee8-94a86740b503] Running
	I1014 20:00:26.451859  547669 system_pods.go:89] "kindnet-68wfd" [5641ead2-4190-4e48-b41f-07309a894764] Running
	I1014 20:00:26.451890  547669 system_pods.go:89] "kube-apiserver-addons-864166" [ba844bf8-6968-48c5-bc90-642f6e308be5] Running
	I1014 20:00:26.451915  547669 system_pods.go:89] "kube-controller-manager-addons-864166" [f2f2b122-e41d-411b-9f5f-0934a2d2bcc6] Running
	I1014 20:00:26.451941  547669 system_pods.go:89] "kube-ingress-dns-minikube" [4c972b0f-59b8-4f1b-aff0-86477fdb31a8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1014 20:00:26.451963  547669 system_pods.go:89] "kube-proxy-qkhq5" [63c22dc1-5b7b-43b2-b879-fba9222761a7] Running
	I1014 20:00:26.451994  547669 system_pods.go:89] "kube-scheduler-addons-864166" [1e259b7d-9efa-46e4-a5ed-023f7e6ebfdf] Running
	I1014 20:00:26.452020  547669 system_pods.go:89] "metrics-server-85b7d694d7-q2bhj" [bf121c3b-58d6-4f5d-9ec7-3f6356b23132] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 20:00:26.452044  547669 system_pods.go:89] "nvidia-device-plugin-daemonset-cgtft" [174ba8d1-a219-4e80-9816-0f60ee316d62] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1014 20:00:26.452067  547669 system_pods.go:89] "registry-6b586f9694-ph85v" [5bda4348-2158-45b7-81f9-366fd18b681a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 20:00:26.452100  547669 system_pods.go:89] "registry-creds-764b6fb674-6rp57" [16928613-b3d8-4dca-854f-3b08843dd017] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1014 20:00:26.452130  547669 system_pods.go:89] "registry-proxy-ww88v" [b91a51f7-e2da-4901-8d80-579ad73448e2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1014 20:00:26.452156  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c992l" [13663a37-601e-474f-a20d-f0a7c1597a30] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:26.452183  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k524x" [d44ca9f8-5376-4e01-b3a9-2050733f2b88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:26.452215  547669 system_pods.go:89] "storage-provisioner" [ba401d7b-7ea2-478c-876b-27114aea306d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:00:26.452251  547669 retry.go:31] will retry after 265.969577ms: missing components: kube-dns
	I1014 20:00:26.723698  547669 system_pods.go:86] 19 kube-system pods found
	I1014 20:00:26.723789  547669 system_pods.go:89] "coredns-66bc5c9577-9rwrs" [c47964fa-47a1-424c-a7eb-5b512551c245] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:00:26.723816  547669 system_pods.go:89] "csi-hostpath-attacher-0" [45ea89e5-0c4a-4a0e-bb7c-5cb62f77f0dd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1014 20:00:26.723860  547669 system_pods.go:89] "csi-hostpath-resizer-0" [6f065c64-49b4-4b19-8928-ef3f62e74cf8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1014 20:00:26.723891  547669 system_pods.go:89] "csi-hostpathplugin-svw4p" [f1b998c8-61ab-40b8-b449-1081efd7beff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1014 20:00:26.723914  547669 system_pods.go:89] "etcd-addons-864166" [8d1396ae-3e63-45fe-9ee8-94a86740b503] Running
	I1014 20:00:26.723938  547669 system_pods.go:89] "kindnet-68wfd" [5641ead2-4190-4e48-b41f-07309a894764] Running
	I1014 20:00:26.723970  547669 system_pods.go:89] "kube-apiserver-addons-864166" [ba844bf8-6968-48c5-bc90-642f6e308be5] Running
	I1014 20:00:26.723998  547669 system_pods.go:89] "kube-controller-manager-addons-864166" [f2f2b122-e41d-411b-9f5f-0934a2d2bcc6] Running
	I1014 20:00:26.724028  547669 system_pods.go:89] "kube-ingress-dns-minikube" [4c972b0f-59b8-4f1b-aff0-86477fdb31a8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1014 20:00:26.724051  547669 system_pods.go:89] "kube-proxy-qkhq5" [63c22dc1-5b7b-43b2-b879-fba9222761a7] Running
	I1014 20:00:26.724094  547669 system_pods.go:89] "kube-scheduler-addons-864166" [1e259b7d-9efa-46e4-a5ed-023f7e6ebfdf] Running
	I1014 20:00:26.724123  547669 system_pods.go:89] "metrics-server-85b7d694d7-q2bhj" [bf121c3b-58d6-4f5d-9ec7-3f6356b23132] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 20:00:26.724149  547669 system_pods.go:89] "nvidia-device-plugin-daemonset-cgtft" [174ba8d1-a219-4e80-9816-0f60ee316d62] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1014 20:00:26.724174  547669 system_pods.go:89] "registry-6b586f9694-ph85v" [5bda4348-2158-45b7-81f9-366fd18b681a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 20:00:26.724206  547669 system_pods.go:89] "registry-creds-764b6fb674-6rp57" [16928613-b3d8-4dca-854f-3b08843dd017] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1014 20:00:26.724233  547669 system_pods.go:89] "registry-proxy-ww88v" [b91a51f7-e2da-4901-8d80-579ad73448e2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1014 20:00:26.724257  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c992l" [13663a37-601e-474f-a20d-f0a7c1597a30] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:26.724284  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k524x" [d44ca9f8-5376-4e01-b3a9-2050733f2b88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:26.724392  547669 system_pods.go:89] "storage-provisioner" [ba401d7b-7ea2-478c-876b-27114aea306d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:00:26.724431  547669 retry.go:31] will retry after 463.425484ms: missing components: kube-dns
	I1014 20:00:26.790895  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:26.831159  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:26.833395  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:26.918012  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:27.193260  547669 system_pods.go:86] 19 kube-system pods found
	I1014 20:00:27.193305  547669 system_pods.go:89] "coredns-66bc5c9577-9rwrs" [c47964fa-47a1-424c-a7eb-5b512551c245] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:00:27.193315  547669 system_pods.go:89] "csi-hostpath-attacher-0" [45ea89e5-0c4a-4a0e-bb7c-5cb62f77f0dd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1014 20:00:27.193324  547669 system_pods.go:89] "csi-hostpath-resizer-0" [6f065c64-49b4-4b19-8928-ef3f62e74cf8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1014 20:00:27.193330  547669 system_pods.go:89] "csi-hostpathplugin-svw4p" [f1b998c8-61ab-40b8-b449-1081efd7beff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1014 20:00:27.193336  547669 system_pods.go:89] "etcd-addons-864166" [8d1396ae-3e63-45fe-9ee8-94a86740b503] Running
	I1014 20:00:27.193342  547669 system_pods.go:89] "kindnet-68wfd" [5641ead2-4190-4e48-b41f-07309a894764] Running
	I1014 20:00:27.193351  547669 system_pods.go:89] "kube-apiserver-addons-864166" [ba844bf8-6968-48c5-bc90-642f6e308be5] Running
	I1014 20:00:27.193356  547669 system_pods.go:89] "kube-controller-manager-addons-864166" [f2f2b122-e41d-411b-9f5f-0934a2d2bcc6] Running
	I1014 20:00:27.193369  547669 system_pods.go:89] "kube-ingress-dns-minikube" [4c972b0f-59b8-4f1b-aff0-86477fdb31a8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1014 20:00:27.193375  547669 system_pods.go:89] "kube-proxy-qkhq5" [63c22dc1-5b7b-43b2-b879-fba9222761a7] Running
	I1014 20:00:27.193380  547669 system_pods.go:89] "kube-scheduler-addons-864166" [1e259b7d-9efa-46e4-a5ed-023f7e6ebfdf] Running
	I1014 20:00:27.193392  547669 system_pods.go:89] "metrics-server-85b7d694d7-q2bhj" [bf121c3b-58d6-4f5d-9ec7-3f6356b23132] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 20:00:27.193398  547669 system_pods.go:89] "nvidia-device-plugin-daemonset-cgtft" [174ba8d1-a219-4e80-9816-0f60ee316d62] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1014 20:00:27.193412  547669 system_pods.go:89] "registry-6b586f9694-ph85v" [5bda4348-2158-45b7-81f9-366fd18b681a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 20:00:27.193418  547669 system_pods.go:89] "registry-creds-764b6fb674-6rp57" [16928613-b3d8-4dca-854f-3b08843dd017] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1014 20:00:27.193425  547669 system_pods.go:89] "registry-proxy-ww88v" [b91a51f7-e2da-4901-8d80-579ad73448e2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1014 20:00:27.193431  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c992l" [13663a37-601e-474f-a20d-f0a7c1597a30] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:27.193440  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k524x" [d44ca9f8-5376-4e01-b3a9-2050733f2b88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:27.193446  547669 system_pods.go:89] "storage-provisioner" [ba401d7b-7ea2-478c-876b-27114aea306d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:00:27.193467  547669 retry.go:31] will retry after 572.588227ms: missing components: kube-dns
	I1014 20:00:27.290859  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:27.333900  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:27.345018  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:27.420045  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:27.771640  547669 system_pods.go:86] 19 kube-system pods found
	I1014 20:00:27.771725  547669 system_pods.go:89] "coredns-66bc5c9577-9rwrs" [c47964fa-47a1-424c-a7eb-5b512551c245] Running
	I1014 20:00:27.771752  547669 system_pods.go:89] "csi-hostpath-attacher-0" [45ea89e5-0c4a-4a0e-bb7c-5cb62f77f0dd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1014 20:00:27.771797  547669 system_pods.go:89] "csi-hostpath-resizer-0" [6f065c64-49b4-4b19-8928-ef3f62e74cf8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1014 20:00:27.771825  547669 system_pods.go:89] "csi-hostpathplugin-svw4p" [f1b998c8-61ab-40b8-b449-1081efd7beff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1014 20:00:27.771845  547669 system_pods.go:89] "etcd-addons-864166" [8d1396ae-3e63-45fe-9ee8-94a86740b503] Running
	I1014 20:00:27.771869  547669 system_pods.go:89] "kindnet-68wfd" [5641ead2-4190-4e48-b41f-07309a894764] Running
	I1014 20:00:27.771900  547669 system_pods.go:89] "kube-apiserver-addons-864166" [ba844bf8-6968-48c5-bc90-642f6e308be5] Running
	I1014 20:00:27.771920  547669 system_pods.go:89] "kube-controller-manager-addons-864166" [f2f2b122-e41d-411b-9f5f-0934a2d2bcc6] Running
	I1014 20:00:27.771941  547669 system_pods.go:89] "kube-ingress-dns-minikube" [4c972b0f-59b8-4f1b-aff0-86477fdb31a8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1014 20:00:27.771962  547669 system_pods.go:89] "kube-proxy-qkhq5" [63c22dc1-5b7b-43b2-b879-fba9222761a7] Running
	I1014 20:00:27.771984  547669 system_pods.go:89] "kube-scheduler-addons-864166" [1e259b7d-9efa-46e4-a5ed-023f7e6ebfdf] Running
	I1014 20:00:27.772019  547669 system_pods.go:89] "metrics-server-85b7d694d7-q2bhj" [bf121c3b-58d6-4f5d-9ec7-3f6356b23132] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 20:00:27.772041  547669 system_pods.go:89] "nvidia-device-plugin-daemonset-cgtft" [174ba8d1-a219-4e80-9816-0f60ee316d62] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1014 20:00:27.772066  547669 system_pods.go:89] "registry-6b586f9694-ph85v" [5bda4348-2158-45b7-81f9-366fd18b681a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 20:00:27.772098  547669 system_pods.go:89] "registry-creds-764b6fb674-6rp57" [16928613-b3d8-4dca-854f-3b08843dd017] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1014 20:00:27.772125  547669 system_pods.go:89] "registry-proxy-ww88v" [b91a51f7-e2da-4901-8d80-579ad73448e2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1014 20:00:27.772150  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c992l" [13663a37-601e-474f-a20d-f0a7c1597a30] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:27.772177  547669 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k524x" [d44ca9f8-5376-4e01-b3a9-2050733f2b88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 20:00:27.772209  547669 system_pods.go:89] "storage-provisioner" [ba401d7b-7ea2-478c-876b-27114aea306d] Running
	I1014 20:00:27.772243  547669 system_pods.go:126] duration metric: took 1.706513602s to wait for k8s-apps to be running ...
	I1014 20:00:27.772267  547669 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 20:00:27.772384  547669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:00:27.786271  547669 system_svc.go:56] duration metric: took 13.996661ms WaitForService to wait for kubelet
	I1014 20:00:27.786300  547669 kubeadm.go:586] duration metric: took 44.586678275s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:00:27.786320  547669 node_conditions.go:102] verifying NodePressure condition ...
	I1014 20:00:27.790619  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:27.791439  547669 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:00:27.791470  547669 node_conditions.go:123] node cpu capacity is 2
	I1014 20:00:27.791483  547669 node_conditions.go:105] duration metric: took 5.157497ms to run NodePressure ...
	I1014 20:00:27.791495  547669 start.go:241] waiting for startup goroutines ...
	I1014 20:00:27.831710  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:27.833646  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:27.918551  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:28.291228  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:28.392121  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:28.392390  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:28.418945  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:28.790640  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:28.831142  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:28.833505  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:28.919189  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:29.290777  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:29.331250  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:29.333242  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:29.420815  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:29.791354  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:29.833265  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:29.835464  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:29.918500  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:30.293998  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:30.333254  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:30.335890  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:30.417452  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:30.792922  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:30.832639  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:30.835590  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:30.918301  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:31.290618  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:31.332496  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:31.335753  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:31.420582  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:31.790969  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:31.834488  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:31.835097  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:31.927282  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:32.291410  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:32.331337  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:32.333627  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:32.417956  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:32.791451  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:32.832609  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:32.834449  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:32.918119  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:33.290666  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:33.331061  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:33.333761  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:33.417702  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:33.606205  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 20:00:33.791419  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:33.832888  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:33.834672  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:33.917185  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:34.290974  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:34.331242  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:34.333930  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:34.417429  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:34.665080  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.058825491s)
	W1014 20:00:34.665157  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:34.665193  547669 retry.go:31] will retry after 24.50091787s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:00:34.792200  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:34.832239  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:34.835057  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:34.918545  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:35.291130  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:35.331926  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:35.334951  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:35.418578  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:35.791059  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:35.831985  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:35.835060  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:35.918091  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:36.291713  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:36.331310  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:36.333880  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:36.417882  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:36.790892  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:36.832664  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:36.834567  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:36.918483  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:37.291050  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:37.392795  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:37.392838  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:37.416965  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:37.791087  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:37.833588  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:37.835812  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:37.917680  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:38.291133  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:38.333167  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:38.335059  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:38.418216  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:38.791902  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:38.831363  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:38.834163  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:38.918000  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:39.291175  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:39.331320  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:39.333793  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:39.420029  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:39.791661  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:39.835377  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:39.836373  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:39.919027  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:40.291397  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:40.332706  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:40.334448  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:40.418416  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:40.792458  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:40.834137  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:40.835772  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:40.917588  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:41.297213  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:41.333108  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:41.335241  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:41.418255  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:41.791514  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:41.830718  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:41.833272  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:41.917598  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:42.291224  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:42.332980  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:42.335509  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:42.417583  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:42.791799  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:42.831236  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:42.833840  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:42.917114  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:43.290618  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:43.330645  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:43.333241  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:43.417626  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:43.791778  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:43.831348  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:43.833850  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:43.917290  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:44.294631  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:44.391155  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:44.391362  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:44.418427  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:44.790847  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:44.831486  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:44.833661  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:44.917913  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:45.291588  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:45.332586  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:45.337335  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:45.417900  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:45.790822  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:45.831056  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:45.833545  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:45.917776  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:46.290728  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:46.331536  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:46.334450  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:46.418742  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:46.792055  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:46.832766  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:46.835166  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:46.917855  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:47.292899  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:47.331199  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:47.333697  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:47.418311  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:47.791185  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:47.831396  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:47.834442  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:47.917705  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:48.290088  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:48.331142  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:48.333550  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:48.417850  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:48.791533  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:48.830605  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:48.834383  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:48.917648  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:49.291533  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:49.330599  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:49.340502  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:49.418391  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:49.791464  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:49.832891  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:49.835350  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:49.918624  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:50.290877  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:50.331590  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:50.334740  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:50.418617  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:50.791077  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:50.831164  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:50.832940  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:50.917200  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:51.290573  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:51.330549  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:51.334160  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:51.417761  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:51.791438  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:51.831562  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:51.834285  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:51.917564  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:52.291315  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:52.333885  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:52.336203  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:52.418421  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:52.792245  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:52.832506  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:52.833766  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:52.917003  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:53.291169  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:53.331207  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:53.333469  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:53.417910  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:53.790991  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:53.831026  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:53.833648  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:53.917921  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:54.291003  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:54.332037  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:54.334396  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:54.417680  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:54.791287  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:54.832081  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:54.836112  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:54.918296  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:55.290896  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:55.332696  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:55.334500  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:55.418126  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:55.790452  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:55.831743  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:55.834677  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:55.918168  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:56.291242  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:56.331664  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:56.334593  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:56.417663  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:56.791148  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:56.832084  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:56.833672  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:56.918054  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:57.291681  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:57.330916  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:57.333728  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 20:00:57.416968  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:57.791439  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:57.831727  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:57.833754  547669 kapi.go:107] duration metric: took 1m8.503132272s to wait for kubernetes.io/minikube-addons=registry ...
	I1014 20:00:57.917282  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:58.290781  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:58.331430  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:58.418408  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:58.792649  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:58.831126  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:58.917803  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:59.166923  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 20:00:59.302851  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:59.334900  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:59.420168  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:00:59.790898  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:00:59.833062  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:00:59.918818  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:00.330902  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:00.355811  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:00.419987  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:00.791878  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:00.831206  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:00.893760  547669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.726801858s)
	W1014 20:01:00.893802  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:01:00.893821  547669 retry.go:31] will retry after 20.531033112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:01:00.917284  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:01.291917  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:01.332675  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:01.418948  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:01.791007  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:01.831462  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:01.917814  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:02.290964  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:02.331358  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:02.417647  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:02.794657  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:02.832175  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:02.945009  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:03.290531  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:03.330931  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:03.417564  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:03.791438  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:03.831949  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:03.917732  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:04.290234  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:04.331251  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:04.418233  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:04.790672  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:04.833142  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:04.917709  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:05.291263  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:05.331508  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:05.417834  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:05.791185  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:05.831699  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:05.917206  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:06.290426  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:06.330533  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:06.417738  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:06.790868  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:06.832127  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:06.922509  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:07.291297  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:07.331445  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:07.418129  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:07.790564  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:07.831423  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:07.918208  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:08.290275  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:08.331278  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:08.417366  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:08.790662  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:08.832084  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:08.917986  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:09.291509  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:09.331684  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:09.417208  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:09.791395  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:09.831858  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:09.917729  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:10.291757  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:10.332810  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:10.417396  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:10.791270  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:10.831490  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:10.919822  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:11.290886  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:11.331361  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:11.418994  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:11.790630  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:11.832008  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:11.933789  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:12.291617  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:12.331129  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:12.417903  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:12.791490  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:12.831119  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:12.919640  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:13.290545  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:13.331683  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:13.419264  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:13.793810  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:13.831141  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:13.917781  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:14.290722  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:14.331186  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:14.420492  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:14.791167  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:14.841216  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:14.918400  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:15.290842  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:15.331455  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:15.418122  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:15.790605  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:15.830816  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:15.917821  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:16.291185  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:16.331915  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:16.417698  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:16.791532  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:16.830743  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:16.918385  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:17.290944  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:17.331211  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:17.416930  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:17.792188  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:17.831864  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:17.917628  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:18.291148  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:18.331766  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:18.417436  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:18.790735  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:18.831083  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:18.917696  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:19.291142  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:19.331079  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:19.417591  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:19.790410  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:19.831257  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:19.917241  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:20.291403  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:20.331663  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:20.417800  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:20.790825  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:20.830983  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:20.918280  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:21.290598  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:21.330720  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:21.417475  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:21.425541  547669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 20:01:21.791384  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:21.831836  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:21.918604  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1014 20:01:22.261734  547669 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:01:22.261858  547669 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1014 20:01:22.291523  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:22.331366  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:22.417286  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:22.790491  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:22.830576  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:22.918048  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:23.291046  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:23.331143  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:23.417792  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:23.790865  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:23.831187  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:23.917228  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:24.290668  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:24.330947  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:24.418455  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:24.791525  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:24.830876  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:24.918633  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:25.291466  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:25.332017  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:25.417538  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:25.791137  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:25.831301  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:25.919602  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:26.291717  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:26.331591  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:26.420746  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:26.793299  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:26.831896  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:26.921264  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:27.290578  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:27.330903  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:27.417101  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:27.813806  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:27.900010  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:27.917064  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:28.290117  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:28.331134  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:28.417815  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:28.803074  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:28.843102  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:28.917797  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:29.290918  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:29.331664  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:29.418646  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:29.791550  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:29.831393  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:29.918152  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:30.292033  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:30.332749  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:30.419556  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:30.791019  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:30.831571  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:30.918501  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:31.309415  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:31.332151  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:31.417625  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:31.791333  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:31.832437  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:31.918037  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:32.291033  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:32.392347  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:32.417550  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:32.790831  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:32.830923  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:32.917695  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:33.291310  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:33.331330  547669 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 20:01:33.417590  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:33.791100  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:33.863646  547669 kapi.go:107] duration metric: took 1m44.536119588s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1014 20:01:33.917714  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:34.291050  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:34.418669  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:34.790913  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:34.917694  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:35.295990  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:35.417961  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:35.790402  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:35.918157  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:36.291268  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:36.417804  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:36.789945  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:36.917548  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:37.291424  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:37.418329  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:37.790425  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:37.917660  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:38.292560  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:38.418012  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:38.791711  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 20:01:38.918873  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:39.291408  547669 kapi.go:107] duration metric: took 1m46.504065867s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1014 20:01:39.294647  547669 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-864166 cluster.
	I1014 20:01:39.297612  547669 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1014 20:01:39.300552  547669 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1014 20:01:39.418434  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:39.916744  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:40.418106  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:40.918137  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:41.417563  547669 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 20:01:41.924049  547669 kapi.go:107] duration metric: took 1m52.010270923s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1014 20:01:41.927242  547669 out.go:179] * Enabled addons: nvidia-device-plugin, registry-creds, ingress-dns, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1014 20:01:41.930048  547669 addons.go:514] duration metric: took 1m58.729976641s for enable addons: enabled=[nvidia-device-plugin registry-creds ingress-dns amd-gpu-device-plugin storage-provisioner cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1014 20:01:41.930107  547669 start.go:246] waiting for cluster config update ...
	I1014 20:01:41.930127  547669 start.go:255] writing updated cluster config ...
	I1014 20:01:41.930438  547669 ssh_runner.go:195] Run: rm -f paused
	I1014 20:01:41.934536  547669 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:01:41.938622  547669 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9rwrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:41.945124  547669 pod_ready.go:94] pod "coredns-66bc5c9577-9rwrs" is "Ready"
	I1014 20:01:41.945153  547669 pod_ready.go:86] duration metric: took 6.502213ms for pod "coredns-66bc5c9577-9rwrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:41.947337  547669 pod_ready.go:83] waiting for pod "etcd-addons-864166" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:41.951452  547669 pod_ready.go:94] pod "etcd-addons-864166" is "Ready"
	I1014 20:01:41.951475  547669 pod_ready.go:86] duration metric: took 4.112058ms for pod "etcd-addons-864166" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:41.953399  547669 pod_ready.go:83] waiting for pod "kube-apiserver-addons-864166" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:41.957365  547669 pod_ready.go:94] pod "kube-apiserver-addons-864166" is "Ready"
	I1014 20:01:41.957391  547669 pod_ready.go:86] duration metric: took 3.966004ms for pod "kube-apiserver-addons-864166" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:41.959441  547669 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-864166" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:42.338939  547669 pod_ready.go:94] pod "kube-controller-manager-addons-864166" is "Ready"
	I1014 20:01:42.338980  547669 pod_ready.go:86] duration metric: took 379.51527ms for pod "kube-controller-manager-addons-864166" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:42.538690  547669 pod_ready.go:83] waiting for pod "kube-proxy-qkhq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:42.938156  547669 pod_ready.go:94] pod "kube-proxy-qkhq5" is "Ready"
	I1014 20:01:42.938196  547669 pod_ready.go:86] duration metric: took 399.478422ms for pod "kube-proxy-qkhq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:43.138831  547669 pod_ready.go:83] waiting for pod "kube-scheduler-addons-864166" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:43.538508  547669 pod_ready.go:94] pod "kube-scheduler-addons-864166" is "Ready"
	I1014 20:01:43.538542  547669 pod_ready.go:86] duration metric: took 399.680812ms for pod "kube-scheduler-addons-864166" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:01:43.538555  547669 pod_ready.go:40] duration metric: took 1.603985321s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:01:43.592686  547669 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1014 20:01:43.597806  547669 out.go:179] * Done! kubectl is now configured to use "addons-864166" cluster and "default" namespace by default
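
Both inspektor-gadget apply failures earlier in this log come from kubectl's validation of /etc/kubernetes/addons/ig-crd.yaml: the file contained a document with no apiVersion or kind fields, so the addon's retry loop could not succeed even though every other gadget object applied cleanly. A minimal sketch of how a similar validation error can be reproduced outside this run, assuming a scratch file whose name and contents are hypothetical and not taken from the test environment:

	# Write a manifest that, like the rejected document in ig-crd.yaml,
	# carries no apiVersion or kind fields (hypothetical file, for illustration only).
	printf 'metadata:\n  name: example\n' > bad-crd.yaml
	# kubectl's validation should reject it with a message similar to the one
	# logged above, e.g. "error validating data: [apiVersion not set, kind not set]";
	# exact wording may differ by kubectl version.
	kubectl apply -f bad-crd.yaml

As the error text itself notes, --validate=false would only suppress the check; the manifest would still need its apiVersion and kind fields restored for the addon to deploy.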
	
	
	==> CRI-O <==
	Oct 14 20:01:41 addons-864166 crio[831]: time="2025-10-14T20:01:41.833237418Z" level=info msg="Created container 78ef6dffdb260387212a93303124e048ee15ae3f59fe9e9bd35468b8d107084b: kube-system/csi-hostpathplugin-svw4p/csi-snapshotter" id=a5e9aeff-d629-43a6-8631-52cdc3c10b40 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:01:41 addons-864166 crio[831]: time="2025-10-14T20:01:41.834309507Z" level=info msg="Starting container: 78ef6dffdb260387212a93303124e048ee15ae3f59fe9e9bd35468b8d107084b" id=ddd14ff6-2be6-417b-ac4b-c40ea494d6ec name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 20:01:41 addons-864166 crio[831]: time="2025-10-14T20:01:41.839036178Z" level=info msg="Started container" PID=4977 containerID=78ef6dffdb260387212a93303124e048ee15ae3f59fe9e9bd35468b8d107084b description=kube-system/csi-hostpathplugin-svw4p/csi-snapshotter id=ddd14ff6-2be6-417b-ac4b-c40ea494d6ec name=/runtime.v1.RuntimeService/StartContainer sandboxID=80793f4f6f28f61ab512b23b1d18cdef95041fee1f57966fae8689f7877b6f7d
	Oct 14 20:01:44 addons-864166 crio[831]: time="2025-10-14T20:01:44.601150982Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d5cf671d-aea6-437e-9679-f27109686ad7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 20:01:44 addons-864166 crio[831]: time="2025-10-14T20:01:44.601232534Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:01:44 addons-864166 crio[831]: time="2025-10-14T20:01:44.607883122Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:97de4626f0493b4e6a9c717ad3209d74ae9513ca9a6ea854f3a52369107d8461 UID:14edcf26-2792-4adf-8b7b-e4965d42ac66 NetNS:/var/run/netns/f442694a-dd15-4f4a-8364-beb69b53002f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001a30560}] Aliases:map[]}"
	Oct 14 20:01:44 addons-864166 crio[831]: time="2025-10-14T20:01:44.60804026Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 14 20:01:44 addons-864166 crio[831]: time="2025-10-14T20:01:44.620041135Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:97de4626f0493b4e6a9c717ad3209d74ae9513ca9a6ea854f3a52369107d8461 UID:14edcf26-2792-4adf-8b7b-e4965d42ac66 NetNS:/var/run/netns/f442694a-dd15-4f4a-8364-beb69b53002f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001a30560}] Aliases:map[]}"
	Oct 14 20:01:44 addons-864166 crio[831]: time="2025-10-14T20:01:44.620192046Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 14 20:01:44 addons-864166 crio[831]: time="2025-10-14T20:01:44.623880926Z" level=info msg="Ran pod sandbox 97de4626f0493b4e6a9c717ad3209d74ae9513ca9a6ea854f3a52369107d8461 with infra container: default/busybox/POD" id=d5cf671d-aea6-437e-9679-f27109686ad7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 20:01:44 addons-864166 crio[831]: time="2025-10-14T20:01:44.625185092Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=84af675d-c7f2-4a9d-8a7c-02c9b01aa068 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:01:44 addons-864166 crio[831]: time="2025-10-14T20:01:44.625315777Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=84af675d-c7f2-4a9d-8a7c-02c9b01aa068 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:01:44 addons-864166 crio[831]: time="2025-10-14T20:01:44.625368094Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=84af675d-c7f2-4a9d-8a7c-02c9b01aa068 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:01:44 addons-864166 crio[831]: time="2025-10-14T20:01:44.626302359Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a97086c8-391a-4908-9c72-c048245b590c name=/runtime.v1.ImageService/PullImage
	Oct 14 20:01:44 addons-864166 crio[831]: time="2025-10-14T20:01:44.627771196Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 14 20:01:46 addons-864166 crio[831]: time="2025-10-14T20:01:46.592193978Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=a97086c8-391a-4908-9c72-c048245b590c name=/runtime.v1.ImageService/PullImage
	Oct 14 20:01:46 addons-864166 crio[831]: time="2025-10-14T20:01:46.592774302Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5db3bd5d-a6ac-42c1-8d4f-676a539e4536 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:01:46 addons-864166 crio[831]: time="2025-10-14T20:01:46.595021447Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=57e82ba5-7385-4b8b-9412-d63b036cd40c name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:01:46 addons-864166 crio[831]: time="2025-10-14T20:01:46.600658958Z" level=info msg="Creating container: default/busybox/busybox" id=5c4c3a07-07f2-4e3c-8beb-27ba4080ab59 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:01:46 addons-864166 crio[831]: time="2025-10-14T20:01:46.60144054Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:01:46 addons-864166 crio[831]: time="2025-10-14T20:01:46.607906978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:01:46 addons-864166 crio[831]: time="2025-10-14T20:01:46.608484348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:01:46 addons-864166 crio[831]: time="2025-10-14T20:01:46.62404684Z" level=info msg="Created container e386ad6e060c046d1723455d0beaca548019a2f9488cd94b51f430d2fd6a8ccf: default/busybox/busybox" id=5c4c3a07-07f2-4e3c-8beb-27ba4080ab59 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:01:46 addons-864166 crio[831]: time="2025-10-14T20:01:46.624932752Z" level=info msg="Starting container: e386ad6e060c046d1723455d0beaca548019a2f9488cd94b51f430d2fd6a8ccf" id=89cfe4b1-7652-4b94-85cd-d5b20c9d473a name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 20:01:46 addons-864166 crio[831]: time="2025-10-14T20:01:46.627472923Z" level=info msg="Started container" PID=5068 containerID=e386ad6e060c046d1723455d0beaca548019a2f9488cd94b51f430d2fd6a8ccf description=default/busybox/busybox id=89cfe4b1-7652-4b94-85cd-d5b20c9d473a name=/runtime.v1.RuntimeService/StartContainer sandboxID=97de4626f0493b4e6a9c717ad3209d74ae9513ca9a6ea854f3a52369107d8461
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	e386ad6e060c0       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          10 seconds ago       Running             busybox                                  0                   97de4626f0493       busybox                                     default
	78ef6dffdb260       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          15 seconds ago       Running             csi-snapshotter                          0                   80793f4f6f28f       csi-hostpathplugin-svw4p                    kube-system
	3ed114fa3e5da       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          16 seconds ago       Running             csi-provisioner                          0                   80793f4f6f28f       csi-hostpathplugin-svw4p                    kube-system
	fc727586dde63       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            18 seconds ago       Running             liveness-probe                           0                   80793f4f6f28f       csi-hostpathplugin-svw4p                    kube-system
	92f9c104d49fb       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 18 seconds ago       Running             gcp-auth                                 0                   bea82525dfd78       gcp-auth-78565c9fb4-gxbj2                   gcp-auth
	7ef76141e9242       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           21 seconds ago       Running             hostpath                                 0                   80793f4f6f28f       csi-hostpathplugin-svw4p                    kube-system
	e0e7e8627c210       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             23 seconds ago       Running             controller                               0                   204117cfa07fd       ingress-nginx-controller-675c5ddd98-gph67   ingress-nginx
	6ef3702df3f19       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            30 seconds ago       Running             gadget                                   0                   186b7daebd493       gadget-jpdp2                                gadget
	c5c1390f4e419       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                40 seconds ago       Running             node-driver-registrar                    0                   80793f4f6f28f       csi-hostpathplugin-svw4p                    kube-system
	08cb4838bbaf5       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              41 seconds ago       Running             yakd                                     0                   f55b6593b7e29       yakd-dashboard-5ff678cb9-rfjnt              yakd-dashboard
	67f5b6eccde75       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   45 seconds ago       Exited              patch                                    0                   eab771aef57f4       ingress-nginx-admission-patch-hmv8p         ingress-nginx
	e4e1818ef03f8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   45 seconds ago       Exited              create                                   0                   ff7f6c6d02b88       gcp-auth-certs-create-fndhd                 gcp-auth
	f351a573847cd       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     45 seconds ago       Running             nvidia-device-plugin-ctr                 0                   11cbf0d919237       nvidia-device-plugin-daemonset-cgtft        kube-system
	ec347adf1bf17       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             45 seconds ago       Exited              patch                                    3                   ddc725586be58       gcp-auth-certs-patch-5gztv                  gcp-auth
	6b79820d7c3cc       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      59 seconds ago       Running             volume-snapshot-controller               0                   5ee9ff1e32616       snapshot-controller-7d9fbc56b8-k524x        kube-system
	5c84e205968bf       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   a38edbbc35c59       registry-proxy-ww88v                        kube-system
	bb4fccdbcf0f5       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   71392ba204600       csi-hostpath-attacher-0                     kube-system
	1313169fc23bd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   6d95eeeebb5fb       ingress-nginx-admission-create-fzdhr        ingress-nginx
	3ca6b7f2cc2b0       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   dc7e7d460397d       snapshot-controller-7d9fbc56b8-c992l        kube-system
	6e40d4e7135a0       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   b054b9bb65849       csi-hostpath-resizer-0                      kube-system
	51b5289a11b57       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   57c59cc0f565c       cloud-spanner-emulator-86bd5cbb97-gz4h4     default
	3e1c1eae40cfd       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   80793f4f6f28f       csi-hostpathplugin-svw4p                    kube-system
	8b466c8617b98       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   835d1e5a47c51       registry-6b586f9694-ph85v                   kube-system
	576ef271047e0       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   0060b955dbf11       local-path-provisioner-648f6765c9-g88bj     local-path-storage
	b80cf57eb9e2b       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   bbc57ed32489e       kube-ingress-dns-minikube                   kube-system
	a327e10ba7623       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   1063f690cee69       metrics-server-85b7d694d7-q2bhj             kube-system
	7d43ec8e4714e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   8386ddb409201       coredns-66bc5c9577-9rwrs                    kube-system
	a97b20dece5ce       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   46c0f84b66bc0       storage-provisioner                         kube-system
	42ea884de055b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   39cbfde4117ab       kube-proxy-qkhq5                            kube-system
	85281702b4961       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   de5337d58aac0       kindnet-68wfd                               kube-system
	b10c05c5f1576       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   5bcedea5e98b2       kube-apiserver-addons-864166                kube-system
	9a2e8bf9aef9f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   bb2583320595d       kube-scheduler-addons-864166                kube-system
	e4d0d5d58460a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   4b14e8eefb0b9       kube-controller-manager-addons-864166       kube-system
	2db14186188f0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   98bf5cc734261       etcd-addons-864166                          kube-system
	
	
	==> coredns [7d43ec8e4714ec16f6fd9c499d2855be2b62082b41c665e68fd2ff0150fb37f0] <==
	[INFO] 10.244.0.9:34505 - 64940 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000120625s
	[INFO] 10.244.0.9:34505 - 5168 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00230057s
	[INFO] 10.244.0.9:34505 - 34696 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002366129s
	[INFO] 10.244.0.9:34505 - 44228 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000118s
	[INFO] 10.244.0.9:34505 - 21731 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000194062s
	[INFO] 10.244.0.9:38546 - 29470 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000202809s
	[INFO] 10.244.0.9:38546 - 29248 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00026827s
	[INFO] 10.244.0.9:38341 - 20159 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000113233s
	[INFO] 10.244.0.9:38341 - 19954 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073937s
	[INFO] 10.244.0.9:58158 - 9788 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087197s
	[INFO] 10.244.0.9:58158 - 9591 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078385s
	[INFO] 10.244.0.9:33675 - 21311 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001235808s
	[INFO] 10.244.0.9:33675 - 21097 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001177928s
	[INFO] 10.244.0.9:51865 - 41272 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000113946s
	[INFO] 10.244.0.9:51865 - 41100 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000158632s
	[INFO] 10.244.0.21:59072 - 4005 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000212581s
	[INFO] 10.244.0.21:41608 - 55500 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000212672s
	[INFO] 10.244.0.21:33590 - 62115 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000201873s
	[INFO] 10.244.0.21:49706 - 15678 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000067496s
	[INFO] 10.244.0.21:55963 - 61985 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000176413s
	[INFO] 10.244.0.21:44971 - 59639 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000066224s
	[INFO] 10.244.0.21:59888 - 958 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002467431s
	[INFO] 10.244.0.21:40874 - 31423 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002297493s
	[INFO] 10.244.0.21:44953 - 17366 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003086813s
	[INFO] 10.244.0.21:33450 - 9935 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.003868468s
	
	
	==> describe nodes <==
	Name:               addons-864166
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-864166
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=addons-864166
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T19_59_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-864166
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-864166"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 19:59:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-864166
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 20:01:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 20:01:41 +0000   Tue, 14 Oct 2025 19:59:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 20:01:41 +0000   Tue, 14 Oct 2025 19:59:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 20:01:41 +0000   Tue, 14 Oct 2025 19:59:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 20:01:41 +0000   Tue, 14 Oct 2025 20:00:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-864166
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 39850f21655f4698832fc12574d92819
	  System UUID:                379c39e7-3898-4b4a-b13c-d3b0f6833857
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  default                     cloud-spanner-emulator-86bd5cbb97-gz4h4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  gadget                      gadget-jpdp2                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  gcp-auth                    gcp-auth-78565c9fb4-gxbj2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  gcp-auth                    gcp-auth-certs-patch-5gztv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-gph67    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m8s
	  kube-system                 coredns-66bc5c9577-9rwrs                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m14s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 csi-hostpathplugin-svw4p                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 etcd-addons-864166                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m19s
	  kube-system                 kindnet-68wfd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m14s
	  kube-system                 kube-apiserver-addons-864166                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-controller-manager-addons-864166        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-proxy-qkhq5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-addons-864166                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 metrics-server-85b7d694d7-q2bhj              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m9s
	  kube-system                 nvidia-device-plugin-daemonset-cgtft         0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 registry-6b586f9694-ph85v                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 registry-creds-764b6fb674-6rp57              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 registry-proxy-ww88v                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 snapshot-controller-7d9fbc56b8-c992l         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 snapshot-controller-7d9fbc56b8-k524x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  local-path-storage          local-path-provisioner-648f6765c9-g88bj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-rfjnt               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m11s  kube-proxy       
	  Normal   Starting                 2m20s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m20s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m19s  kubelet          Node addons-864166 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m19s  kubelet          Node addons-864166 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m19s  kubelet          Node addons-864166 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m15s  node-controller  Node addons-864166 event: Registered Node addons-864166 in Controller
	  Normal   NodeReady                92s    kubelet          Node addons-864166 status is now: NodeReady
	
	
	==> dmesg <==
	[ +31.622660] overlayfs: idmapped layers are currently not supported
	[Oct14 19:29] overlayfs: idmapped layers are currently not supported
	[Oct14 19:30] overlayfs: idmapped layers are currently not supported
	[Oct14 19:32] overlayfs: idmapped layers are currently not supported
	[Oct14 19:38] overlayfs: idmapped layers are currently not supported
	[Oct14 19:41] overlayfs: idmapped layers are currently not supported
	[ +36.095229] overlayfs: idmapped layers are currently not supported
	[Oct14 19:42] overlayfs: idmapped layers are currently not supported
	[ +18.587469] overlayfs: idmapped layers are currently not supported
	[Oct14 19:44] overlayfs: idmapped layers are currently not supported
	[ +28.158583] overlayfs: idmapped layers are currently not supported
	[Oct14 19:45] overlayfs: idmapped layers are currently not supported
	[Oct14 19:46] overlayfs: idmapped layers are currently not supported
	[Oct14 19:47] overlayfs: idmapped layers are currently not supported
	[ +31.505487] overlayfs: idmapped layers are currently not supported
	[Oct14 19:48] overlayfs: idmapped layers are currently not supported
	[ +35.381153] overlayfs: idmapped layers are currently not supported
	[ +14.062769] overlayfs: idmapped layers are currently not supported
	[Oct14 19:50] overlayfs: idmapped layers are currently not supported
	[Oct14 19:51] overlayfs: idmapped layers are currently not supported
	[Oct14 19:52] overlayfs: idmapped layers are currently not supported
	[  +0.143459] overlayfs: idmapped layers are currently not supported
	[Oct14 19:54] overlayfs: idmapped layers are currently not supported
	[Oct14 19:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct14 19:59] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2db14186188f011755f8c1390f57aedc39076f38c3d4a6a2c599a1788e75fa6b] <==
	{"level":"warn","ts":"2025-10-14T19:59:34.084424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.103204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.134340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.170342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.195092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.226880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.245899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.323660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.352682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.404122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.437954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.439904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.462870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.488994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.534111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.561828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.568167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.591659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:34.684408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:50.008387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T19:59:50.029070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:00:12.316968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:00:12.336533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:00:12.486929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:00:12.501956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58858","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [92f9c104d49fbb70f558252b04eb0b8b220a5e848af1bfa501480caaa1efebd9] <==
	2025/10/14 20:01:38 GCP Auth Webhook started!
	2025/10/14 20:01:44 http: TLS handshake error from 10.244.0.1:17854: remote error: tls: bad certificate
	2025/10/14 20:01:44 http: TLS handshake error from 10.244.0.1:25338: remote error: tls: bad certificate
	2025/10/14 20:01:44 http: TLS handshake error from 10.244.0.1:23767: remote error: tls: bad certificate
	
	
	==> kernel <==
	 20:01:57 up  2:44,  0 user,  load average: 3.88, 2.64, 2.61
	Linux addons-864166 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [85281702b49614b94db01ce11d8ea99610d57489ec6345063e8f199a3d311e3c] <==
	E1014 20:00:15.298333       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1014 20:00:15.298388       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1014 20:00:16.697848       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1014 20:00:16.697989       1 metrics.go:72] Registering metrics
	I1014 20:00:16.698094       1 controller.go:711] "Syncing nftables rules"
	I1014 20:00:25.296437       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:00:25.296634       1 main.go:301] handling current node
	I1014 20:00:35.294469       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:00:35.294517       1 main.go:301] handling current node
	I1014 20:00:45.291988       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:00:45.292029       1 main.go:301] handling current node
	I1014 20:00:55.292181       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:00:55.292214       1 main.go:301] handling current node
	I1014 20:01:05.295694       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:01:05.295724       1 main.go:301] handling current node
	I1014 20:01:15.292950       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:01:15.293000       1 main.go:301] handling current node
	I1014 20:01:25.293101       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:01:25.293140       1 main.go:301] handling current node
	I1014 20:01:35.292694       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:01:35.292835       1 main.go:301] handling current node
	I1014 20:01:45.292710       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:01:45.292886       1 main.go:301] handling current node
	I1014 20:01:55.295221       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:01:55.295345       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b10c05c5f1576c84d6e7be3e027ef408b730a5606775cc99dc050a8b46cf99fe] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1014 20:00:32.937743       1 handler_proxy.go:99] no RequestInfo found in the context
	W1014 20:00:32.937823       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 20:00:32.937884       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 20:00:32.937895       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1014 20:00:32.937916       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1014 20:00:32.939024       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 20:00:36.945920       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 20:00:36.945984       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1014 20:00:36.947869       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.216.123:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.216.123:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1014 20:00:37.000064       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1014 20:00:37.024929       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	W1014 20:01:44.037483       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate-sa.k8s.io: failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": tls: failed to verify certificate: x509: certificate signed by unknown authority
	E1014 20:01:44.037531       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate-sa.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s\": tls: failed to verify certificate: x509: certificate signed by unknown authority" logger="UnhandledError"
	W1014 20:01:44.277193       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": tls: failed to verify certificate: x509: certificate signed by unknown authority
	E1014 20:01:44.277287       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": tls: failed to verify certificate: x509: certificate signed by unknown authority" logger="UnhandledError"
	W1014 20:01:44.374485       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate-sa.k8s.io: failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": tls: failed to verify certificate: x509: certificate signed by unknown authority
	E1014 20:01:44.374588       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate-sa.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s\": tls: failed to verify certificate: x509: certificate signed by unknown authority" logger="UnhandledError"
	
	
	==> kube-controller-manager [e4d0d5d58460a7cf4d55d124ee5894e17b05fee1741e46856247cdb064c0b542] <==
	I1014 19:59:42.351988       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1014 19:59:42.351999       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1014 19:59:42.352585       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1014 19:59:42.352602       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1014 19:59:42.352611       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1014 19:59:42.355866       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1014 19:59:42.356068       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1014 19:59:42.356133       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 19:59:42.356165       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1014 19:59:42.356193       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1014 19:59:42.366635       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1014 19:59:42.372398       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-864166" podCIDRs=["10.244.0.0/24"]
	I1014 19:59:42.400989       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 19:59:42.401013       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1014 19:59:42.401020       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 19:59:42.467284       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1014 19:59:48.036978       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1014 20:00:12.309501       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:00:12.309668       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1014 20:00:12.309718       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1014 20:00:12.410449       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 20:00:12.475209       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1014 20:00:12.479685       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1014 20:00:12.580841       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 20:00:27.360085       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [42ea884de055b59b2ffed5e7b9d28454868620756a73d3280a1aa24fb121dc9d] <==
	I1014 19:59:45.211201       1 server_linux.go:53] "Using iptables proxy"
	I1014 19:59:45.381935       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 19:59:45.501874       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 19:59:45.501916       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1014 19:59:45.502007       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 19:59:45.543195       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 19:59:45.543254       1 server_linux.go:132] "Using iptables Proxier"
	I1014 19:59:45.555999       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 19:59:45.556330       1 server.go:527] "Version info" version="v1.34.1"
	I1014 19:59:45.556352       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 19:59:45.557721       1 config.go:200] "Starting service config controller"
	I1014 19:59:45.557732       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 19:59:45.557756       1 config.go:106] "Starting endpoint slice config controller"
	I1014 19:59:45.557761       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 19:59:45.557773       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 19:59:45.557777       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 19:59:45.558431       1 config.go:309] "Starting node config controller"
	I1014 19:59:45.558438       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 19:59:45.558444       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 19:59:45.658890       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 19:59:45.658928       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 19:59:45.658972       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9a2e8bf9aef9f2ba07816aca3818d22c434fd19e119b2383aa10a93919d3ece5] <==
	E1014 19:59:35.495581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1014 19:59:35.495706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1014 19:59:35.495858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1014 19:59:35.495964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1014 19:59:35.496060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1014 19:59:35.496145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1014 19:59:35.496269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1014 19:59:35.496368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1014 19:59:35.496505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1014 19:59:35.496567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1014 19:59:35.496658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1014 19:59:35.494753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1014 19:59:36.314025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1014 19:59:36.314182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1014 19:59:36.362504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1014 19:59:36.362664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1014 19:59:36.425670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1014 19:59:36.481820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1014 19:59:36.510151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1014 19:59:36.572937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1014 19:59:36.607415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1014 19:59:36.637726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1014 19:59:36.669497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1014 19:59:36.689035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1014 19:59:37.149859       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 14 20:01:13 addons-864166 kubelet[1310]: I1014 20:01:13.726244    1310 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff7f6c6d02b8837b3c5dbec967575bed7be971a2ecb15425729c481e832768f7"
	Oct 14 20:01:13 addons-864166 kubelet[1310]: I1014 20:01:13.875169    1310 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq7j2\" (UniqueName: \"kubernetes.io/projected/ff1a59dd-f94e-44c4-96a9-7e08dbb30be4-kube-api-access-bq7j2\") pod \"ff1a59dd-f94e-44c4-96a9-7e08dbb30be4\" (UID: \"ff1a59dd-f94e-44c4-96a9-7e08dbb30be4\") "
	Oct 14 20:01:13 addons-864166 kubelet[1310]: I1014 20:01:13.877654    1310 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff1a59dd-f94e-44c4-96a9-7e08dbb30be4-kube-api-access-bq7j2" (OuterVolumeSpecName: "kube-api-access-bq7j2") pod "ff1a59dd-f94e-44c4-96a9-7e08dbb30be4" (UID: "ff1a59dd-f94e-44c4-96a9-7e08dbb30be4"). InnerVolumeSpecName "kube-api-access-bq7j2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 14 20:01:13 addons-864166 kubelet[1310]: I1014 20:01:13.976373    1310 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bq7j2\" (UniqueName: \"kubernetes.io/projected/ff1a59dd-f94e-44c4-96a9-7e08dbb30be4-kube-api-access-bq7j2\") on node \"addons-864166\" DevicePath \"\""
	Oct 14 20:01:14 addons-864166 kubelet[1310]: I1014 20:01:14.731038    1310 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eab771aef57f4ebf1b28e2d65281535c744a89c86f02f34c24426cb379101fbb"
	Oct 14 20:01:22 addons-864166 kubelet[1310]: I1014 20:01:22.990006    1310 scope.go:117] "RemoveContainer" containerID="ec347adf1bf172832abb8f56f6a4b0b990fff9cc4d4b485a93e8ca69a2f5cdda"
	Oct 14 20:01:22 addons-864166 kubelet[1310]: E1014 20:01:22.990672    1310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"patch\" with CrashLoopBackOff: \"back-off 40s restarting failed container=patch pod=gcp-auth-certs-patch-5gztv_gcp-auth(5fb0a30b-c831-4d6c-a6e8-3c3d86c143eb)\"" pod="gcp-auth/gcp-auth-certs-patch-5gztv" podUID="5fb0a30b-c831-4d6c-a6e8-3c3d86c143eb"
	Oct 14 20:01:23 addons-864166 kubelet[1310]: I1014 20:01:23.003649    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-rfjnt" podStartSLOduration=47.144716536 podStartE2EDuration="1m35.003626353s" podCreationTimestamp="2025-10-14 19:59:48 +0000 UTC" firstStartedPulling="2025-10-14 20:00:27.627192483 +0000 UTC m=+49.800298036" lastFinishedPulling="2025-10-14 20:01:15.4861023 +0000 UTC m=+97.659207853" observedRunningTime="2025-10-14 20:01:15.751531849 +0000 UTC m=+97.924637434" watchObservedRunningTime="2025-10-14 20:01:23.003626353 +0000 UTC m=+105.176731914"
	Oct 14 20:01:27 addons-864166 kubelet[1310]: I1014 20:01:27.779290    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-jpdp2" podStartSLOduration=66.378969859 podStartE2EDuration="1m39.779271539s" podCreationTimestamp="2025-10-14 19:59:48 +0000 UTC" firstStartedPulling="2025-10-14 20:00:52.720115089 +0000 UTC m=+74.893220642" lastFinishedPulling="2025-10-14 20:01:26.120416769 +0000 UTC m=+108.293522322" observedRunningTime="2025-10-14 20:01:26.826951377 +0000 UTC m=+109.000056954" watchObservedRunningTime="2025-10-14 20:01:27.779271539 +0000 UTC m=+109.952377108"
	Oct 14 20:01:29 addons-864166 kubelet[1310]: E1014 20:01:29.624123    1310 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 14 20:01:29 addons-864166 kubelet[1310]: E1014 20:01:29.624223    1310 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16928613-b3d8-4dca-854f-3b08843dd017-gcr-creds podName:16928613-b3d8-4dca-854f-3b08843dd017 nodeName:}" failed. No retries permitted until 2025-10-14 20:02:33.624203222 +0000 UTC m=+175.797308783 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/16928613-b3d8-4dca-854f-3b08843dd017-gcr-creds") pod "registry-creds-764b6fb674-6rp57" (UID: "16928613-b3d8-4dca-854f-3b08843dd017") : secret "registry-creds-gcr" not found
	Oct 14 20:01:33 addons-864166 kubelet[1310]: I1014 20:01:33.864288    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-gph67" podStartSLOduration=70.129539196 podStartE2EDuration="1m44.864269246s" podCreationTimestamp="2025-10-14 19:59:49 +0000 UTC" firstStartedPulling="2025-10-14 20:00:59.002838878 +0000 UTC m=+81.175944447" lastFinishedPulling="2025-10-14 20:01:33.737568936 +0000 UTC m=+115.910674497" observedRunningTime="2025-10-14 20:01:33.863895267 +0000 UTC m=+116.037000836" watchObservedRunningTime="2025-10-14 20:01:33.864269246 +0000 UTC m=+116.037374807"
	Oct 14 20:01:35 addons-864166 kubelet[1310]: I1014 20:01:35.990200    1310 scope.go:117] "RemoveContainer" containerID="ec347adf1bf172832abb8f56f6a4b0b990fff9cc4d4b485a93e8ca69a2f5cdda"
	Oct 14 20:01:35 addons-864166 kubelet[1310]: E1014 20:01:35.991122    1310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"patch\" with CrashLoopBackOff: \"back-off 40s restarting failed container=patch pod=gcp-auth-certs-patch-5gztv_gcp-auth(5fb0a30b-c831-4d6c-a6e8-3c3d86c143eb)\"" pod="gcp-auth/gcp-auth-certs-patch-5gztv" podUID="5fb0a30b-c831-4d6c-a6e8-3c3d86c143eb"
	Oct 14 20:01:36 addons-864166 kubelet[1310]: I1014 20:01:36.201962    1310 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 14 20:01:36 addons-864166 kubelet[1310]: I1014 20:01:36.202033    1310 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 14 20:01:38 addons-864166 kubelet[1310]: E1014 20:01:38.184556    1310 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c95f9af1d5b3ad1c9f6b81dea22f4d40889d07fd0f0f8ce400ca740103c9ad1b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c95f9af1d5b3ad1c9f6b81dea22f4d40889d07fd0f0f8ce400ca740103c9ad1b/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/gcp-auth_gcp-auth-certs-patch-5gztv_5fb0a30b-c831-4d6c-a6e8-3c3d86c143eb/patch/1.log" to get inode usage: stat /var/log/pods/gcp-auth_gcp-auth-certs-patch-5gztv_5fb0a30b-c831-4d6c-a6e8-3c3d86c143eb/patch/1.log: no such file or directory
	Oct 14 20:01:38 addons-864166 kubelet[1310]: I1014 20:01:38.886164    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-gxbj2" podStartSLOduration=101.213805859 podStartE2EDuration="1m46.886147727s" podCreationTimestamp="2025-10-14 19:59:52 +0000 UTC" firstStartedPulling="2025-10-14 20:01:32.324226661 +0000 UTC m=+114.497332214" lastFinishedPulling="2025-10-14 20:01:37.996568521 +0000 UTC m=+120.169674082" observedRunningTime="2025-10-14 20:01:38.885722753 +0000 UTC m=+121.058828305" watchObservedRunningTime="2025-10-14 20:01:38.886147727 +0000 UTC m=+121.059253288"
	Oct 14 20:01:44 addons-864166 kubelet[1310]: I1014 20:01:44.082628    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-svw4p" podStartSLOduration=3.631556692 podStartE2EDuration="1m19.082603555s" podCreationTimestamp="2025-10-14 20:00:25 +0000 UTC" firstStartedPulling="2025-10-14 20:00:26.342565851 +0000 UTC m=+48.515671403" lastFinishedPulling="2025-10-14 20:01:41.793612713 +0000 UTC m=+123.966718266" observedRunningTime="2025-10-14 20:01:41.9199573 +0000 UTC m=+124.093062869" watchObservedRunningTime="2025-10-14 20:01:44.082603555 +0000 UTC m=+126.255709116"
	Oct 14 20:01:44 addons-864166 kubelet[1310]: I1014 20:01:44.359511    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tcq6\" (UniqueName: \"kubernetes.io/projected/14edcf26-2792-4adf-8b7b-e4965d42ac66-kube-api-access-4tcq6\") pod \"busybox\" (UID: \"14edcf26-2792-4adf-8b7b-e4965d42ac66\") " pod="default/busybox"
	Oct 14 20:01:44 addons-864166 kubelet[1310]: W1014 20:01:44.622615    1310 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7451099bfbc41feb339304668d302d413eb146e933e638dfa5e77237a964fd90/crio-97de4626f0493b4e6a9c717ad3209d74ae9513ca9a6ea854f3a52369107d8461 WatchSource:0}: Error finding container 97de4626f0493b4e6a9c717ad3209d74ae9513ca9a6ea854f3a52369107d8461: Status 404 returned error can't find the container with id 97de4626f0493b4e6a9c717ad3209d74ae9513ca9a6ea854f3a52369107d8461
	Oct 14 20:01:45 addons-864166 kubelet[1310]: I1014 20:01:45.992383    1310 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="967361ad-ae60-4c21-8247-063f27e00161" path="/var/lib/kubelet/pods/967361ad-ae60-4c21-8247-063f27e00161/volumes"
	Oct 14 20:01:46 addons-864166 kubelet[1310]: I1014 20:01:46.933372    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.965363496 podStartE2EDuration="2.933350331s" podCreationTimestamp="2025-10-14 20:01:44 +0000 UTC" firstStartedPulling="2025-10-14 20:01:44.625646227 +0000 UTC m=+126.798751780" lastFinishedPulling="2025-10-14 20:01:46.593633062 +0000 UTC m=+128.766738615" observedRunningTime="2025-10-14 20:01:46.932719603 +0000 UTC m=+129.105825155" watchObservedRunningTime="2025-10-14 20:01:46.933350331 +0000 UTC m=+129.106455884"
	Oct 14 20:01:48 addons-864166 kubelet[1310]: I1014 20:01:48.990481    1310 scope.go:117] "RemoveContainer" containerID="ec347adf1bf172832abb8f56f6a4b0b990fff9cc4d4b485a93e8ca69a2f5cdda"
	Oct 14 20:01:48 addons-864166 kubelet[1310]: E1014 20:01:48.990692    1310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"patch\" with CrashLoopBackOff: \"back-off 40s restarting failed container=patch pod=gcp-auth-certs-patch-5gztv_gcp-auth(5fb0a30b-c831-4d6c-a6e8-3c3d86c143eb)\"" pod="gcp-auth/gcp-auth-certs-patch-5gztv" podUID="5fb0a30b-c831-4d6c-a6e8-3c3d86c143eb"
	
	
	==> storage-provisioner [a97b20dece5cef1c39d364f190fe8456df7716b92bb6f6df215eb010bcf02fbd] <==
	W1014 20:01:32.990726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:34.994352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:34.998887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:37.003409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:37.011573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:39.014848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:39.025657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:41.028946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:41.037145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:43.040994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:43.045959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:45.050552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:45.056500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:47.060494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:47.067935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:49.070923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:49.082595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:51.093994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:51.102427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:53.106376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:53.112895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:55.116183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:55.124412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:57.134129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:01:57.144628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-864166 -n addons-864166
helpers_test.go:269: (dbg) Run:  kubectl --context addons-864166 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-fzdhr ingress-nginx-admission-patch-hmv8p registry-creds-764b6fb674-6rp57
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-864166 describe pod ingress-nginx-admission-create-fzdhr ingress-nginx-admission-patch-hmv8p registry-creds-764b6fb674-6rp57
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-864166 describe pod ingress-nginx-admission-create-fzdhr ingress-nginx-admission-patch-hmv8p registry-creds-764b6fb674-6rp57: exit status 1 (85.011517ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fzdhr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-hmv8p" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-6rp57" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-864166 describe pod ingress-nginx-admission-create-fzdhr ingress-nginx-admission-patch-hmv8p registry-creds-764b6fb674-6rp57: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-864166 addons disable headlamp --alsologtostderr -v=1: exit status 11 (262.416202ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:01:58.453622  554756 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:01:58.454408  554756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:01:58.454445  554756 out.go:374] Setting ErrFile to fd 2...
	I1014 20:01:58.454466  554756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:01:58.454754  554756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:01:58.455081  554756 mustload.go:65] Loading cluster: addons-864166
	I1014 20:01:58.455511  554756 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:01:58.455556  554756 addons.go:606] checking whether the cluster is paused
	I1014 20:01:58.455687  554756 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:01:58.455728  554756 host.go:66] Checking if "addons-864166" exists ...
	I1014 20:01:58.456220  554756 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 20:01:58.474399  554756 ssh_runner.go:195] Run: systemctl --version
	I1014 20:01:58.474452  554756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 20:01:58.493202  554756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 20:01:58.594991  554756 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:01:58.595082  554756 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:01:58.630619  554756 cri.go:89] found id: "78ef6dffdb260387212a93303124e048ee15ae3f59fe9e9bd35468b8d107084b"
	I1014 20:01:58.630637  554756 cri.go:89] found id: "3ed114fa3e5da1f56e85f7fdeb97a771d558093b848ce41b26b0190620cf6a24"
	I1014 20:01:58.630642  554756 cri.go:89] found id: "fc727586dde63cef131e5e2dfdd8994cdc8ea730335a4f871bba2a005befe4e8"
	I1014 20:01:58.630646  554756 cri.go:89] found id: "7ef76141e924284d99fe6357b0cf6e9a9814008d19175065a40cde5b8528e5eb"
	I1014 20:01:58.630649  554756 cri.go:89] found id: "c5c1390f4e419d576d9abf3bc94f4f415d5506cd18519bbef0ed92f76dcb8563"
	I1014 20:01:58.630652  554756 cri.go:89] found id: "f351a573847cd029abb7fab37c1b8dc5b3394d4b549cabc0ca60a3397fc69b5a"
	I1014 20:01:58.630655  554756 cri.go:89] found id: "6b79820d7c3ccb688d9ce73d50754490d90aed59de1d994b1ea5f51e0ef0ce41"
	I1014 20:01:58.630658  554756 cri.go:89] found id: "5c84e205968bfa0f86e3149ec248c3043ee206110d9dd9cae1c33f560650623f"
	I1014 20:01:58.630661  554756 cri.go:89] found id: "bb4fccdbcf0f5935eade34108ffaaeb4808ebf3729950f90af52693ef750035d"
	I1014 20:01:58.630667  554756 cri.go:89] found id: "3ca6b7f2cc2b00918861fa8884e5be14bedcb23e8716f7e52f25cedb6f1c315b"
	I1014 20:01:58.630670  554756 cri.go:89] found id: "6e40d4e7135a05eeab04f2727c8cb3b945e0f7b35864c83e20e72da10b93b9db"
	I1014 20:01:58.630673  554756 cri.go:89] found id: "3e1c1eae40cfd23976decfe14caca8cae08ad49efaea19f05efbabe9e41e3687"
	I1014 20:01:58.630676  554756 cri.go:89] found id: "8b466c8617b98ff2a65c993bf79bfb679961f4698a8118de615eca5609bee49c"
	I1014 20:01:58.630679  554756 cri.go:89] found id: "b80cf57eb9e2bee53026bb3cbe3b0443bc8a1032d1a442cdcd330a87563f6d9f"
	I1014 20:01:58.630682  554756 cri.go:89] found id: "a327e10ba76231afd828f288b7c929804d2fa432f0c2fb5d440b6d4cbfc09050"
	I1014 20:01:58.630687  554756 cri.go:89] found id: "7d43ec8e4714ec16f6fd9c499d2855be2b62082b41c665e68fd2ff0150fb37f0"
	I1014 20:01:58.630691  554756 cri.go:89] found id: "a97b20dece5cef1c39d364f190fe8456df7716b92bb6f6df215eb010bcf02fbd"
	I1014 20:01:58.630694  554756 cri.go:89] found id: "42ea884de055b59b2ffed5e7b9d28454868620756a73d3280a1aa24fb121dc9d"
	I1014 20:01:58.630697  554756 cri.go:89] found id: "85281702b49614b94db01ce11d8ea99610d57489ec6345063e8f199a3d311e3c"
	I1014 20:01:58.630700  554756 cri.go:89] found id: "b10c05c5f1576c84d6e7be3e027ef408b730a5606775cc99dc050a8b46cf99fe"
	I1014 20:01:58.630705  554756 cri.go:89] found id: "9a2e8bf9aef9f2ba07816aca3818d22c434fd19e119b2383aa10a93919d3ece5"
	I1014 20:01:58.630708  554756 cri.go:89] found id: "e4d0d5d58460a7cf4d55d124ee5894e17b05fee1741e46856247cdb064c0b542"
	I1014 20:01:58.630711  554756 cri.go:89] found id: "2db14186188f011755f8c1390f57aedc39076f38c3d4a6a2c599a1788e75fa6b"
	I1014 20:01:58.630714  554756 cri.go:89] found id: ""
	I1014 20:01:58.630765  554756 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 20:01:58.647542  554756 out.go:203] 
	W1014 20:01:58.650549  554756 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:01:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:01:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 20:01:58.650578  554756 out.go:285] * 
	* 
	W1014 20:01:58.658004  554756 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:01:58.660898  554756 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-864166 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.15s)
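
Note on the exit status 11 failures: the Headlamp failure above and the CloudSpanner, LocalPath, NvidiaDevicePlugin and Yakd failures below all abort at the same point. Before disabling an addon, minikube checks whether the cluster is paused; the log shows it listing kube-system containers with crictl and then running `sudo runc list -f json`, which fails on this crio node because /run/runc does not exist, so the command exits with MK_ADDON_DISABLE_PAUSED. The Go sketch below is a minimal, self-contained reproduction of that paused-state check, under the assumption that runc's JSON list output carries `id` and `status` fields; it is not the minikube implementation.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer mirrors the fields this sketch assumes in `runc list -f json`
// output; only the id and status are needed to spot paused containers.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"` // e.g. "running", "paused"
}

// pausedContainers reproduces the check the failures above hinge on:
// run `sudo runc list -f json` and collect the ids reported as paused.
func pausedContainers() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// On this crio node /run/runc is missing, so runc prints
		// "open /run/runc: no such file or directory" and exits 1,
		// which is the error surfaced as MK_ADDON_DISABLE_PAUSED.
		return nil, fmt.Errorf("list paused: runc: %w", err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := pausedContainers()
	if err != nil {
		fmt.Println("check paused:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}

On a node where /run/runc is absent, the exec call returns a non-zero exit, matching the "check paused: list paused: runc" error wrapped into each of the addon disable failures in this report.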

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.29s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-gz4h4" [d49fcea9-86a6-4eae-8a58-c04df87ce50d] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005209659s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-864166 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (274.32796ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:03:18.003333  556750 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:03:18.004197  556750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:18.004246  556750 out.go:374] Setting ErrFile to fd 2...
	I1014 20:03:18.004253  556750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:18.004754  556750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:03:18.005250  556750 mustload.go:65] Loading cluster: addons-864166
	I1014 20:03:18.005799  556750 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:18.005819  556750 addons.go:606] checking whether the cluster is paused
	I1014 20:03:18.005970  556750 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:18.005991  556750 host.go:66] Checking if "addons-864166" exists ...
	I1014 20:03:18.006679  556750 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 20:03:18.024442  556750 ssh_runner.go:195] Run: systemctl --version
	I1014 20:03:18.024503  556750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 20:03:18.046280  556750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 20:03:18.155729  556750 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:03:18.155829  556750 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:03:18.188687  556750 cri.go:89] found id: "78ef6dffdb260387212a93303124e048ee15ae3f59fe9e9bd35468b8d107084b"
	I1014 20:03:18.188718  556750 cri.go:89] found id: "3ed114fa3e5da1f56e85f7fdeb97a771d558093b848ce41b26b0190620cf6a24"
	I1014 20:03:18.188724  556750 cri.go:89] found id: "fc727586dde63cef131e5e2dfdd8994cdc8ea730335a4f871bba2a005befe4e8"
	I1014 20:03:18.188729  556750 cri.go:89] found id: "7ef76141e924284d99fe6357b0cf6e9a9814008d19175065a40cde5b8528e5eb"
	I1014 20:03:18.188732  556750 cri.go:89] found id: "c5c1390f4e419d576d9abf3bc94f4f415d5506cd18519bbef0ed92f76dcb8563"
	I1014 20:03:18.188736  556750 cri.go:89] found id: "f351a573847cd029abb7fab37c1b8dc5b3394d4b549cabc0ca60a3397fc69b5a"
	I1014 20:03:18.188739  556750 cri.go:89] found id: "6b79820d7c3ccb688d9ce73d50754490d90aed59de1d994b1ea5f51e0ef0ce41"
	I1014 20:03:18.188742  556750 cri.go:89] found id: "5c84e205968bfa0f86e3149ec248c3043ee206110d9dd9cae1c33f560650623f"
	I1014 20:03:18.188745  556750 cri.go:89] found id: "bb4fccdbcf0f5935eade34108ffaaeb4808ebf3729950f90af52693ef750035d"
	I1014 20:03:18.188778  556750 cri.go:89] found id: "3ca6b7f2cc2b00918861fa8884e5be14bedcb23e8716f7e52f25cedb6f1c315b"
	I1014 20:03:18.188782  556750 cri.go:89] found id: "6e40d4e7135a05eeab04f2727c8cb3b945e0f7b35864c83e20e72da10b93b9db"
	I1014 20:03:18.188785  556750 cri.go:89] found id: "3e1c1eae40cfd23976decfe14caca8cae08ad49efaea19f05efbabe9e41e3687"
	I1014 20:03:18.188789  556750 cri.go:89] found id: "8b466c8617b98ff2a65c993bf79bfb679961f4698a8118de615eca5609bee49c"
	I1014 20:03:18.188792  556750 cri.go:89] found id: "b80cf57eb9e2bee53026bb3cbe3b0443bc8a1032d1a442cdcd330a87563f6d9f"
	I1014 20:03:18.188796  556750 cri.go:89] found id: "a327e10ba76231afd828f288b7c929804d2fa432f0c2fb5d440b6d4cbfc09050"
	I1014 20:03:18.188801  556750 cri.go:89] found id: "7d43ec8e4714ec16f6fd9c499d2855be2b62082b41c665e68fd2ff0150fb37f0"
	I1014 20:03:18.188808  556750 cri.go:89] found id: "a97b20dece5cef1c39d364f190fe8456df7716b92bb6f6df215eb010bcf02fbd"
	I1014 20:03:18.188813  556750 cri.go:89] found id: "42ea884de055b59b2ffed5e7b9d28454868620756a73d3280a1aa24fb121dc9d"
	I1014 20:03:18.188816  556750 cri.go:89] found id: "85281702b49614b94db01ce11d8ea99610d57489ec6345063e8f199a3d311e3c"
	I1014 20:03:18.188819  556750 cri.go:89] found id: "b10c05c5f1576c84d6e7be3e027ef408b730a5606775cc99dc050a8b46cf99fe"
	I1014 20:03:18.188825  556750 cri.go:89] found id: "9a2e8bf9aef9f2ba07816aca3818d22c434fd19e119b2383aa10a93919d3ece5"
	I1014 20:03:18.188828  556750 cri.go:89] found id: "e4d0d5d58460a7cf4d55d124ee5894e17b05fee1741e46856247cdb064c0b542"
	I1014 20:03:18.188848  556750 cri.go:89] found id: "2db14186188f011755f8c1390f57aedc39076f38c3d4a6a2c599a1788e75fa6b"
	I1014 20:03:18.188856  556750 cri.go:89] found id: ""
	I1014 20:03:18.188922  556750 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 20:03:18.203880  556750 out.go:203] 
	W1014 20:03:18.206805  556750 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:03:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:03:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 20:03:18.206833  556750 out.go:285] * 
	* 
	W1014 20:03:18.214042  556750 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:03:18.217188  556750 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-864166 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.29s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (9.6s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-864166 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-864166 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864166 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [08e4a9b6-5169-424c-950a-eee199d55833] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [08e4a9b6-5169-424c-950a-eee199d55833] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [08e4a9b6-5169-424c-950a-eee199d55833] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.014927505s
addons_test.go:967: (dbg) Run:  kubectl --context addons-864166 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 ssh "cat /opt/local-path-provisioner/pvc-3acf2ef9-374e-4fc8-a1d8-3c1f2c4c5867_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-864166 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-864166 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-864166 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (340.39165ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:03:01.115887  556508 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:03:01.117066  556508 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:01.117089  556508 out.go:374] Setting ErrFile to fd 2...
	I1014 20:03:01.117095  556508 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:01.117604  556508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:03:01.117976  556508 mustload.go:65] Loading cluster: addons-864166
	I1014 20:03:01.118401  556508 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:01.118414  556508 addons.go:606] checking whether the cluster is paused
	I1014 20:03:01.118523  556508 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:01.118537  556508 host.go:66] Checking if "addons-864166" exists ...
	I1014 20:03:01.119029  556508 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 20:03:01.136908  556508 ssh_runner.go:195] Run: systemctl --version
	I1014 20:03:01.137017  556508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 20:03:01.164296  556508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 20:03:01.274045  556508 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:03:01.274135  556508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:03:01.320092  556508 cri.go:89] found id: "78ef6dffdb260387212a93303124e048ee15ae3f59fe9e9bd35468b8d107084b"
	I1014 20:03:01.320155  556508 cri.go:89] found id: "3ed114fa3e5da1f56e85f7fdeb97a771d558093b848ce41b26b0190620cf6a24"
	I1014 20:03:01.320174  556508 cri.go:89] found id: "fc727586dde63cef131e5e2dfdd8994cdc8ea730335a4f871bba2a005befe4e8"
	I1014 20:03:01.320193  556508 cri.go:89] found id: "7ef76141e924284d99fe6357b0cf6e9a9814008d19175065a40cde5b8528e5eb"
	I1014 20:03:01.320211  556508 cri.go:89] found id: "c5c1390f4e419d576d9abf3bc94f4f415d5506cd18519bbef0ed92f76dcb8563"
	I1014 20:03:01.320232  556508 cri.go:89] found id: "f351a573847cd029abb7fab37c1b8dc5b3394d4b549cabc0ca60a3397fc69b5a"
	I1014 20:03:01.320254  556508 cri.go:89] found id: "6b79820d7c3ccb688d9ce73d50754490d90aed59de1d994b1ea5f51e0ef0ce41"
	I1014 20:03:01.320277  556508 cri.go:89] found id: "5c84e205968bfa0f86e3149ec248c3043ee206110d9dd9cae1c33f560650623f"
	I1014 20:03:01.320298  556508 cri.go:89] found id: "bb4fccdbcf0f5935eade34108ffaaeb4808ebf3729950f90af52693ef750035d"
	I1014 20:03:01.320371  556508 cri.go:89] found id: "3ca6b7f2cc2b00918861fa8884e5be14bedcb23e8716f7e52f25cedb6f1c315b"
	I1014 20:03:01.320401  556508 cri.go:89] found id: "6e40d4e7135a05eeab04f2727c8cb3b945e0f7b35864c83e20e72da10b93b9db"
	I1014 20:03:01.320419  556508 cri.go:89] found id: "3e1c1eae40cfd23976decfe14caca8cae08ad49efaea19f05efbabe9e41e3687"
	I1014 20:03:01.320438  556508 cri.go:89] found id: "8b466c8617b98ff2a65c993bf79bfb679961f4698a8118de615eca5609bee49c"
	I1014 20:03:01.320458  556508 cri.go:89] found id: "b80cf57eb9e2bee53026bb3cbe3b0443bc8a1032d1a442cdcd330a87563f6d9f"
	I1014 20:03:01.320484  556508 cri.go:89] found id: "a327e10ba76231afd828f288b7c929804d2fa432f0c2fb5d440b6d4cbfc09050"
	I1014 20:03:01.320513  556508 cri.go:89] found id: "7d43ec8e4714ec16f6fd9c499d2855be2b62082b41c665e68fd2ff0150fb37f0"
	I1014 20:03:01.320543  556508 cri.go:89] found id: "a97b20dece5cef1c39d364f190fe8456df7716b92bb6f6df215eb010bcf02fbd"
	I1014 20:03:01.320563  556508 cri.go:89] found id: "42ea884de055b59b2ffed5e7b9d28454868620756a73d3280a1aa24fb121dc9d"
	I1014 20:03:01.320582  556508 cri.go:89] found id: "85281702b49614b94db01ce11d8ea99610d57489ec6345063e8f199a3d311e3c"
	I1014 20:03:01.320601  556508 cri.go:89] found id: "b10c05c5f1576c84d6e7be3e027ef408b730a5606775cc99dc050a8b46cf99fe"
	I1014 20:03:01.320621  556508 cri.go:89] found id: "9a2e8bf9aef9f2ba07816aca3818d22c434fd19e119b2383aa10a93919d3ece5"
	I1014 20:03:01.320625  556508 cri.go:89] found id: "e4d0d5d58460a7cf4d55d124ee5894e17b05fee1741e46856247cdb064c0b542"
	I1014 20:03:01.320628  556508 cri.go:89] found id: "2db14186188f011755f8c1390f57aedc39076f38c3d4a6a2c599a1788e75fa6b"
	I1014 20:03:01.320632  556508 cri.go:89] found id: ""
	I1014 20:03:01.320689  556508 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 20:03:01.340952  556508 out.go:203] 
	W1014 20:03:01.344644  556508 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:03:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:03:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 20:03:01.344674  556508 out.go:285] * 
	* 
	W1014 20:03:01.351842  556508 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:03:01.355492  556508 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-864166 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.60s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.31s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-cgtft" [174ba8d1-a219-4e80-9816-0f60ee316d62] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003784202s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-864166 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (298.296183ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:03:12.685402  556671 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:03:12.686315  556671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:12.686329  556671 out.go:374] Setting ErrFile to fd 2...
	I1014 20:03:12.686335  556671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:12.686635  556671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:03:12.686935  556671 mustload.go:65] Loading cluster: addons-864166
	I1014 20:03:12.687327  556671 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:12.687344  556671 addons.go:606] checking whether the cluster is paused
	I1014 20:03:12.687442  556671 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:12.687463  556671 host.go:66] Checking if "addons-864166" exists ...
	I1014 20:03:12.687898  556671 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 20:03:12.712707  556671 ssh_runner.go:195] Run: systemctl --version
	I1014 20:03:12.712776  556671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 20:03:12.739968  556671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 20:03:12.847094  556671 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:03:12.847208  556671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:03:12.900677  556671 cri.go:89] found id: "78ef6dffdb260387212a93303124e048ee15ae3f59fe9e9bd35468b8d107084b"
	I1014 20:03:12.900702  556671 cri.go:89] found id: "3ed114fa3e5da1f56e85f7fdeb97a771d558093b848ce41b26b0190620cf6a24"
	I1014 20:03:12.900707  556671 cri.go:89] found id: "fc727586dde63cef131e5e2dfdd8994cdc8ea730335a4f871bba2a005befe4e8"
	I1014 20:03:12.900716  556671 cri.go:89] found id: "7ef76141e924284d99fe6357b0cf6e9a9814008d19175065a40cde5b8528e5eb"
	I1014 20:03:12.900720  556671 cri.go:89] found id: "c5c1390f4e419d576d9abf3bc94f4f415d5506cd18519bbef0ed92f76dcb8563"
	I1014 20:03:12.900724  556671 cri.go:89] found id: "f351a573847cd029abb7fab37c1b8dc5b3394d4b549cabc0ca60a3397fc69b5a"
	I1014 20:03:12.900727  556671 cri.go:89] found id: "6b79820d7c3ccb688d9ce73d50754490d90aed59de1d994b1ea5f51e0ef0ce41"
	I1014 20:03:12.900730  556671 cri.go:89] found id: "5c84e205968bfa0f86e3149ec248c3043ee206110d9dd9cae1c33f560650623f"
	I1014 20:03:12.900733  556671 cri.go:89] found id: "bb4fccdbcf0f5935eade34108ffaaeb4808ebf3729950f90af52693ef750035d"
	I1014 20:03:12.900740  556671 cri.go:89] found id: "3ca6b7f2cc2b00918861fa8884e5be14bedcb23e8716f7e52f25cedb6f1c315b"
	I1014 20:03:12.900744  556671 cri.go:89] found id: "6e40d4e7135a05eeab04f2727c8cb3b945e0f7b35864c83e20e72da10b93b9db"
	I1014 20:03:12.900748  556671 cri.go:89] found id: "3e1c1eae40cfd23976decfe14caca8cae08ad49efaea19f05efbabe9e41e3687"
	I1014 20:03:12.900751  556671 cri.go:89] found id: "8b466c8617b98ff2a65c993bf79bfb679961f4698a8118de615eca5609bee49c"
	I1014 20:03:12.900755  556671 cri.go:89] found id: "b80cf57eb9e2bee53026bb3cbe3b0443bc8a1032d1a442cdcd330a87563f6d9f"
	I1014 20:03:12.900759  556671 cri.go:89] found id: "a327e10ba76231afd828f288b7c929804d2fa432f0c2fb5d440b6d4cbfc09050"
	I1014 20:03:12.900765  556671 cri.go:89] found id: "7d43ec8e4714ec16f6fd9c499d2855be2b62082b41c665e68fd2ff0150fb37f0"
	I1014 20:03:12.900771  556671 cri.go:89] found id: "a97b20dece5cef1c39d364f190fe8456df7716b92bb6f6df215eb010bcf02fbd"
	I1014 20:03:12.900775  556671 cri.go:89] found id: "42ea884de055b59b2ffed5e7b9d28454868620756a73d3280a1aa24fb121dc9d"
	I1014 20:03:12.900778  556671 cri.go:89] found id: "85281702b49614b94db01ce11d8ea99610d57489ec6345063e8f199a3d311e3c"
	I1014 20:03:12.900781  556671 cri.go:89] found id: "b10c05c5f1576c84d6e7be3e027ef408b730a5606775cc99dc050a8b46cf99fe"
	I1014 20:03:12.900786  556671 cri.go:89] found id: "9a2e8bf9aef9f2ba07816aca3818d22c434fd19e119b2383aa10a93919d3ece5"
	I1014 20:03:12.900789  556671 cri.go:89] found id: "e4d0d5d58460a7cf4d55d124ee5894e17b05fee1741e46856247cdb064c0b542"
	I1014 20:03:12.900792  556671 cri.go:89] found id: "2db14186188f011755f8c1390f57aedc39076f38c3d4a6a2c599a1788e75fa6b"
	I1014 20:03:12.900795  556671 cri.go:89] found id: ""
	I1014 20:03:12.900853  556671 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 20:03:12.916138  556671 out.go:203] 
	W1014 20:03:12.919081  556671 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:03:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:03:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 20:03:12.919110  556671 out.go:285] * 
	* 
	W1014 20:03:12.926328  556671 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:03:12.929231  556671 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-864166 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.31s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-rfjnt" [e70d80f9-2825-405c-88d1-ea995b23eea0] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003733254s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-864166 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-864166 addons disable yakd --alsologtostderr -v=1: exit status 11 (260.40208ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:03:07.423953  556612 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:03:07.424919  556612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:07.424943  556612 out.go:374] Setting ErrFile to fd 2...
	I1014 20:03:07.424950  556612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:07.425280  556612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:03:07.425642  556612 mustload.go:65] Loading cluster: addons-864166
	I1014 20:03:07.426024  556612 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:07.426041  556612 addons.go:606] checking whether the cluster is paused
	I1014 20:03:07.426152  556612 config.go:182] Loaded profile config "addons-864166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:07.426171  556612 host.go:66] Checking if "addons-864166" exists ...
	I1014 20:03:07.426644  556612 cli_runner.go:164] Run: docker container inspect addons-864166 --format={{.State.Status}}
	I1014 20:03:07.444892  556612 ssh_runner.go:195] Run: systemctl --version
	I1014 20:03:07.444960  556612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864166
	I1014 20:03:07.462479  556612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33507 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/addons-864166/id_rsa Username:docker}
	I1014 20:03:07.562957  556612 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:03:07.563038  556612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:03:07.592850  556612 cri.go:89] found id: "78ef6dffdb260387212a93303124e048ee15ae3f59fe9e9bd35468b8d107084b"
	I1014 20:03:07.592923  556612 cri.go:89] found id: "3ed114fa3e5da1f56e85f7fdeb97a771d558093b848ce41b26b0190620cf6a24"
	I1014 20:03:07.592957  556612 cri.go:89] found id: "fc727586dde63cef131e5e2dfdd8994cdc8ea730335a4f871bba2a005befe4e8"
	I1014 20:03:07.592991  556612 cri.go:89] found id: "7ef76141e924284d99fe6357b0cf6e9a9814008d19175065a40cde5b8528e5eb"
	I1014 20:03:07.593013  556612 cri.go:89] found id: "c5c1390f4e419d576d9abf3bc94f4f415d5506cd18519bbef0ed92f76dcb8563"
	I1014 20:03:07.593030  556612 cri.go:89] found id: "f351a573847cd029abb7fab37c1b8dc5b3394d4b549cabc0ca60a3397fc69b5a"
	I1014 20:03:07.593049  556612 cri.go:89] found id: "6b79820d7c3ccb688d9ce73d50754490d90aed59de1d994b1ea5f51e0ef0ce41"
	I1014 20:03:07.593080  556612 cri.go:89] found id: "5c84e205968bfa0f86e3149ec248c3043ee206110d9dd9cae1c33f560650623f"
	I1014 20:03:07.593103  556612 cri.go:89] found id: "bb4fccdbcf0f5935eade34108ffaaeb4808ebf3729950f90af52693ef750035d"
	I1014 20:03:07.593126  556612 cri.go:89] found id: "3ca6b7f2cc2b00918861fa8884e5be14bedcb23e8716f7e52f25cedb6f1c315b"
	I1014 20:03:07.593162  556612 cri.go:89] found id: "6e40d4e7135a05eeab04f2727c8cb3b945e0f7b35864c83e20e72da10b93b9db"
	I1014 20:03:07.593186  556612 cri.go:89] found id: "3e1c1eae40cfd23976decfe14caca8cae08ad49efaea19f05efbabe9e41e3687"
	I1014 20:03:07.593205  556612 cri.go:89] found id: "8b466c8617b98ff2a65c993bf79bfb679961f4698a8118de615eca5609bee49c"
	I1014 20:03:07.593239  556612 cri.go:89] found id: "b80cf57eb9e2bee53026bb3cbe3b0443bc8a1032d1a442cdcd330a87563f6d9f"
	I1014 20:03:07.593263  556612 cri.go:89] found id: "a327e10ba76231afd828f288b7c929804d2fa432f0c2fb5d440b6d4cbfc09050"
	I1014 20:03:07.593293  556612 cri.go:89] found id: "7d43ec8e4714ec16f6fd9c499d2855be2b62082b41c665e68fd2ff0150fb37f0"
	I1014 20:03:07.593335  556612 cri.go:89] found id: "a97b20dece5cef1c39d364f190fe8456df7716b92bb6f6df215eb010bcf02fbd"
	I1014 20:03:07.593361  556612 cri.go:89] found id: "42ea884de055b59b2ffed5e7b9d28454868620756a73d3280a1aa24fb121dc9d"
	I1014 20:03:07.593380  556612 cri.go:89] found id: "85281702b49614b94db01ce11d8ea99610d57489ec6345063e8f199a3d311e3c"
	I1014 20:03:07.593416  556612 cri.go:89] found id: "b10c05c5f1576c84d6e7be3e027ef408b730a5606775cc99dc050a8b46cf99fe"
	I1014 20:03:07.593441  556612 cri.go:89] found id: "9a2e8bf9aef9f2ba07816aca3818d22c434fd19e119b2383aa10a93919d3ece5"
	I1014 20:03:07.593460  556612 cri.go:89] found id: "e4d0d5d58460a7cf4d55d124ee5894e17b05fee1741e46856247cdb064c0b542"
	I1014 20:03:07.593477  556612 cri.go:89] found id: "2db14186188f011755f8c1390f57aedc39076f38c3d4a6a2c599a1788e75fa6b"
	I1014 20:03:07.593519  556612 cri.go:89] found id: ""
	I1014 20:03:07.593606  556612 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 20:03:07.609410  556612 out.go:203] 
	W1014 20:03:07.612307  556612 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:03:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:03:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 20:03:07.612370  556612 out.go:285] * 
	* 
	W1014 20:03:07.619665  556612 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:03:07.622840  556612 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-864166 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)

                                                
                                    
x
+
TestForceSystemdFlag (519.34s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-162846 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-162846 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m35.641810118s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-162846] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-162846" primary control-plane node in "force-systemd-flag-162846" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 21:03:12.562437  710209 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:03:12.562586  710209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:03:12.562598  710209 out.go:374] Setting ErrFile to fd 2...
	I1014 21:03:12.562604  710209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:03:12.562874  710209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:03:12.563303  710209 out.go:368] Setting JSON to false
	I1014 21:03:12.564244  710209 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13543,"bootTime":1760462250,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:03:12.564356  710209 start.go:141] virtualization:  
	I1014 21:03:12.568206  710209 out.go:179] * [force-systemd-flag-162846] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:03:12.572963  710209 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:03:12.573082  710209 notify.go:220] Checking for updates...
	I1014 21:03:12.579845  710209 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:03:12.583172  710209 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:03:12.586512  710209 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:03:12.589605  710209 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:03:12.592807  710209 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:03:12.596529  710209 config.go:182] Loaded profile config "kubernetes-upgrade-791601": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:03:12.596667  710209 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:03:12.625385  710209 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:03:12.625536  710209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:03:12.682723  710209 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:03:12.672931085 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:03:12.682839  710209 docker.go:318] overlay module found
	I1014 21:03:12.686063  710209 out.go:179] * Using the docker driver based on user configuration
	I1014 21:03:12.689032  710209 start.go:305] selected driver: docker
	I1014 21:03:12.689057  710209 start.go:925] validating driver "docker" against <nil>
	I1014 21:03:12.689072  710209 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:03:12.689814  710209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:03:12.753084  710209 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:03:12.74394752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:03:12.753261  710209 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 21:03:12.753487  710209 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 21:03:12.756638  710209 out.go:179] * Using Docker driver with root privileges
	I1014 21:03:12.759757  710209 cni.go:84] Creating CNI manager for ""
	I1014 21:03:12.759831  710209 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:03:12.759846  710209 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 21:03:12.759926  710209 start.go:349] cluster config:
	{Name:force-systemd-flag-162846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-162846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:03:12.763168  710209 out.go:179] * Starting "force-systemd-flag-162846" primary control-plane node in "force-systemd-flag-162846" cluster
	I1014 21:03:12.766006  710209 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:03:12.768957  710209 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:03:12.771790  710209 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:03:12.771866  710209 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:03:12.771903  710209 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 21:03:12.771918  710209 cache.go:58] Caching tarball of preloaded images
	I1014 21:03:12.771999  710209 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:03:12.772015  710209 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 21:03:12.772125  710209 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/config.json ...
	I1014 21:03:12.772147  710209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/config.json: {Name:mkdafa6e81449b77eb2cbf41bec8334f87e020be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:03:12.791865  710209 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:03:12.791896  710209 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:03:12.791912  710209 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:03:12.791934  710209 start.go:360] acquireMachinesLock for force-systemd-flag-162846: {Name:mkaf806286f427cd6c6d9acdd20f928b0b032b34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:03:12.792047  710209 start.go:364] duration metric: took 90.75µs to acquireMachinesLock for "force-systemd-flag-162846"
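
The machines lock above is taken with a 500ms retry delay and a 10m timeout before provisioning starts. A toy Go sketch of that acquire-with-retry pattern, assuming a simple exclusive lock file (minikube's real lock comes from a library; acquireLock is an illustrative name only):

    package locksketch

    import (
        "fmt"
        "os"
        "time"
    )

    // acquireLock creates the lock file exclusively, retrying every `delay`
    // until `timeout` elapses; the returned func releases the lock.
    func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
            }
            time.Sleep(delay)
        }
    }
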
	I1014 21:03:12.792084  710209 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-162846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-162846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:03:12.792148  710209 start.go:125] createHost starting for "" (driver="docker")
	I1014 21:03:12.795792  710209 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 21:03:12.796019  710209 start.go:159] libmachine.API.Create for "force-systemd-flag-162846" (driver="docker")
	I1014 21:03:12.796063  710209 client.go:168] LocalClient.Create starting
	I1014 21:03:12.796134  710209 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem
	I1014 21:03:12.796177  710209 main.go:141] libmachine: Decoding PEM data...
	I1014 21:03:12.796195  710209 main.go:141] libmachine: Parsing certificate...
	I1014 21:03:12.796248  710209 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem
	I1014 21:03:12.796268  710209 main.go:141] libmachine: Decoding PEM data...
	I1014 21:03:12.796285  710209 main.go:141] libmachine: Parsing certificate...
	I1014 21:03:12.796735  710209 cli_runner.go:164] Run: docker network inspect force-systemd-flag-162846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 21:03:12.813014  710209 cli_runner.go:211] docker network inspect force-systemd-flag-162846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 21:03:12.813096  710209 network_create.go:284] running [docker network inspect force-systemd-flag-162846] to gather additional debugging logs...
	I1014 21:03:12.813121  710209 cli_runner.go:164] Run: docker network inspect force-systemd-flag-162846
	W1014 21:03:12.829511  710209 cli_runner.go:211] docker network inspect force-systemd-flag-162846 returned with exit code 1
	I1014 21:03:12.829542  710209 network_create.go:287] error running [docker network inspect force-systemd-flag-162846]: docker network inspect force-systemd-flag-162846: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-162846 not found
	I1014 21:03:12.829577  710209 network_create.go:289] output of [docker network inspect force-systemd-flag-162846]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-162846 not found
	
	** /stderr **
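
The exit-1 from `docker network inspect` above is the expected "not found" case for a fresh profile; minikube re-runs the bare inspect only to capture the stderr shown here, then goes on to create the network. A hedged Go sketch of how that "not found" outcome can be separated from real failures (the function name is mine, not minikube's):

    package netsketch

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // networkExists treats a "not found" inspect failure as a normal negative
    // answer and only surfaces other errors.
    func networkExists(name string) (bool, error) {
        out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
        if err == nil {
            return true, nil
        }
        if strings.Contains(string(out), "not found") {
            return false, nil
        }
        return false, fmt.Errorf("docker network inspect %s: %w: %s", name, err, out)
    }
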
	I1014 21:03:12.829682  710209 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:03:12.846368  710209 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3483fd8ccce IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:16:f0:2b:be:5b} reservation:<nil>}
	I1014 21:03:12.846675  710209 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-92eecc9d5dcd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:46:39:04:de:cc:9d} reservation:<nil>}
	I1014 21:03:12.846978  710209 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6db3c426d67c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:2a:f4:c0:70:fe} reservation:<nil>}
	I1014 21:03:12.847416  710209 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019dac70}
	I1014 21:03:12.847440  710209 network_create.go:124] attempt to create docker network force-systemd-flag-162846 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1014 21:03:12.847495  710209 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-162846 force-systemd-flag-162846
	I1014 21:03:12.901517  710209 network_create.go:108] docker network force-systemd-flag-162846 192.168.76.0/24 created
	I1014 21:03:12.901551  710209 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-162846" container
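
The lines above show the subnet probe walking 192.168.49.0/24, .58 and .67 (all taken by existing bridges) before settling on 192.168.76.0/24, creating the bridge with .1 as gateway and MTU 1500, and reserving .2 as the node's static IP. A rough Go sketch of the same step-by-9 probe, simplified to attempt the create directly instead of inspecting existing bridges first (the shortcut and names are mine):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        const name = "force-systemd-flag-162846"
        // Candidate third octets: 49, 58, 67, 76, ... (step of 9, as in the log).
        for octet := 49; octet < 255; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            gateway := fmt.Sprintf("192.168.%d.1", octet)
            cmd := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
                "-o", "com.docker.network.driver.mtu=1500", name)
            if err := cmd.Run(); err == nil {
                fmt.Println("created", name, "on", subnet)
                return
            }
            // Create failed, most likely because the subnet overlaps an existing
            // bridge (as .49/.58/.67 did above); try the next candidate.
        }
        log.Fatal("no free 192.168.x.0/24 subnet found")
    }
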
	I1014 21:03:12.901663  710209 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 21:03:12.918042  710209 cli_runner.go:164] Run: docker volume create force-systemd-flag-162846 --label name.minikube.sigs.k8s.io=force-systemd-flag-162846 --label created_by.minikube.sigs.k8s.io=true
	I1014 21:03:12.938665  710209 oci.go:103] Successfully created a docker volume force-systemd-flag-162846
	I1014 21:03:12.938758  710209 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-162846-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-162846 --entrypoint /usr/bin/test -v force-systemd-flag-162846:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 21:03:13.502757  710209 oci.go:107] Successfully prepared a docker volume force-systemd-flag-162846
	I1014 21:03:13.502816  710209 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:03:13.502836  710209 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 21:03:13.502915  710209 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-162846:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 21:03:18.171563  710209 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-162846:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.66860427s)
	I1014 21:03:18.171592  710209 kic.go:203] duration metric: took 4.668753925s to extract preloaded images to volume ...
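
The 4.67s extraction above is a throwaway container that bind-mounts the lz4 preload read-only and untars it into the profile's volume. A hedged Go helper wrapping that same invocation (the wrapper name is illustrative; the flags are copied from the logged command):

    package preloadsketch

    import (
        "os"
        "os/exec"
    )

    // extractPreload replays the "docker run --rm --entrypoint /usr/bin/tar" call
    // from the log: the tarball is mounted read-only, the volume receives the images.
    func extractPreload(tarball, volume, baseImage string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            baseImage,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }
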
	W1014 21:03:18.171731  710209 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1014 21:03:18.171839  710209 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 21:03:18.266568  710209 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-162846 --name force-systemd-flag-162846 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-162846 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-162846 --network force-systemd-flag-162846 --ip 192.168.76.2 --volume force-systemd-flag-162846:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 21:03:18.613282  710209 cli_runner.go:164] Run: docker container inspect force-systemd-flag-162846 --format={{.State.Running}}
	I1014 21:03:18.638275  710209 cli_runner.go:164] Run: docker container inspect force-systemd-flag-162846 --format={{.State.Status}}
	I1014 21:03:18.666417  710209 cli_runner.go:164] Run: docker exec force-systemd-flag-162846 stat /var/lib/dpkg/alternatives/iptables
	I1014 21:03:18.729549  710209 oci.go:144] the created container "force-systemd-flag-162846" has a running status.
	I1014 21:03:18.729588  710209 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-flag-162846/id_rsa...
	I1014 21:03:19.245328  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-flag-162846/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1014 21:03:19.245425  710209 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-flag-162846/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 21:03:19.268122  710209 cli_runner.go:164] Run: docker container inspect force-systemd-flag-162846 --format={{.State.Status}}
	I1014 21:03:19.298310  710209 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 21:03:19.298334  710209 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-162846 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 21:03:19.385842  710209 cli_runner.go:164] Run: docker container inspect force-systemd-flag-162846 --format={{.State.Status}}
	I1014 21:03:19.421474  710209 machine.go:93] provisionDockerMachine start ...
	I1014 21:03:19.421564  710209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-162846
	I1014 21:03:19.441651  710209 main.go:141] libmachine: Using SSH client type: native
	I1014 21:03:19.441985  710209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1014 21:03:19.442007  710209 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:03:19.442583  710209 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41892->127.0.0.1:33762: read: connection reset by peer
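
The connection reset on this first dial is the usual race between the container starting and sshd coming up inside it; libmachine retries, and the next log line shows the hostname command succeeding about three seconds later. A minimal TCP-level sketch of such a wait loop (this is not minikube's code, only an illustration of the pattern, polling the forwarded port 127.0.0.1:33762 seen above):

    package sshsketch

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH polls the forwarded SSH port until it accepts a TCP
    // connection or the deadline passes.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("ssh port %s never became reachable: %w", addr, err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
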
	I1014 21:03:22.601340  710209 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-162846
	
	I1014 21:03:22.601366  710209 ubuntu.go:182] provisioning hostname "force-systemd-flag-162846"
	I1014 21:03:22.601428  710209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-162846
	I1014 21:03:22.628632  710209 main.go:141] libmachine: Using SSH client type: native
	I1014 21:03:22.628950  710209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1014 21:03:22.628968  710209 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-162846 && echo "force-systemd-flag-162846" | sudo tee /etc/hostname
	I1014 21:03:22.793743  710209 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-162846
	
	I1014 21:03:22.793827  710209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-162846
	I1014 21:03:22.811543  710209 main.go:141] libmachine: Using SSH client type: native
	I1014 21:03:22.811862  710209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1014 21:03:22.811886  710209 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-162846' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-162846/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-162846' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:03:22.964646  710209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 21:03:22.964676  710209 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:03:22.964718  710209 ubuntu.go:190] setting up certificates
	I1014 21:03:22.964737  710209 provision.go:84] configureAuth start
	I1014 21:03:22.964815  710209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-162846
	I1014 21:03:22.981671  710209 provision.go:143] copyHostCerts
	I1014 21:03:22.981716  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:03:22.981753  710209 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:03:22.981765  710209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:03:22.981858  710209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:03:22.981955  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:03:22.981980  710209 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:03:22.981988  710209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:03:22.982016  710209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:03:22.982074  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:03:22.982095  710209 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:03:22.982100  710209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:03:22.982127  710209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:03:22.982187  710209 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-162846 san=[127.0.0.1 192.168.76.2 force-systemd-flag-162846 localhost minikube]
	I1014 21:03:23.820395  710209 provision.go:177] copyRemoteCerts
	I1014 21:03:23.820470  710209 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:03:23.820521  710209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-162846
	I1014 21:03:23.846042  710209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-flag-162846/id_rsa Username:docker}
	I1014 21:03:23.966339  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 21:03:23.966399  710209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:03:23.998604  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 21:03:23.998670  710209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1014 21:03:24.032303  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 21:03:24.032390  710209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 21:03:24.062870  710209 provision.go:87] duration metric: took 1.098104457s to configureAuth
	I1014 21:03:24.062896  710209 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:03:24.063093  710209 config.go:182] Loaded profile config "force-systemd-flag-162846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:03:24.063202  710209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-162846
	I1014 21:03:24.087890  710209 main.go:141] libmachine: Using SSH client type: native
	I1014 21:03:24.088205  710209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1014 21:03:24.088225  710209 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:03:24.463661  710209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:03:24.463688  710209 machine.go:96] duration metric: took 5.042192524s to provisionDockerMachine
	I1014 21:03:24.463703  710209 client.go:171] duration metric: took 11.667634009s to LocalClient.Create
	I1014 21:03:24.463717  710209 start.go:167] duration metric: took 11.667700669s to libmachine.API.Create "force-systemd-flag-162846"
	I1014 21:03:24.463724  710209 start.go:293] postStartSetup for "force-systemd-flag-162846" (driver="docker")
	I1014 21:03:24.463734  710209 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:03:24.463813  710209 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:03:24.463858  710209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-162846
	I1014 21:03:24.497833  710209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-flag-162846/id_rsa Username:docker}
	I1014 21:03:24.627138  710209 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:03:24.630900  710209 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:03:24.630932  710209 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:03:24.630944  710209 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:03:24.631009  710209 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:03:24.631109  710209 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:03:24.631124  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /etc/ssl/certs/5469102.pem
	I1014 21:03:24.631226  710209 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:03:24.646607  710209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:03:24.678208  710209 start.go:296] duration metric: took 214.468544ms for postStartSetup
	I1014 21:03:24.678581  710209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-162846
	I1014 21:03:24.703973  710209 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/config.json ...
	I1014 21:03:24.704267  710209 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:03:24.704337  710209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-162846
	I1014 21:03:24.736573  710209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-flag-162846/id_rsa Username:docker}
	I1014 21:03:24.856847  710209 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:03:24.864902  710209 start.go:128] duration metric: took 12.072740023s to createHost
	I1014 21:03:24.864929  710209 start.go:83] releasing machines lock for "force-systemd-flag-162846", held for 12.07286663s
	I1014 21:03:24.865008  710209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-162846
	I1014 21:03:24.885924  710209 ssh_runner.go:195] Run: cat /version.json
	I1014 21:03:24.885994  710209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-162846
	I1014 21:03:24.886224  710209 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:03:24.886281  710209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-162846
	I1014 21:03:24.923228  710209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-flag-162846/id_rsa Username:docker}
	I1014 21:03:24.928908  710209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-flag-162846/id_rsa Username:docker}
	I1014 21:03:25.183101  710209 ssh_runner.go:195] Run: systemctl --version
	I1014 21:03:25.189928  710209 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:03:25.250009  710209 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:03:25.257597  710209 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:03:25.257672  710209 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:03:25.299884  710209 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1014 21:03:25.299908  710209 start.go:495] detecting cgroup driver to use...
	I1014 21:03:25.299922  710209 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1014 21:03:25.299977  710209 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:03:25.329016  710209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:03:25.350962  710209 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:03:25.351028  710209 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:03:25.382195  710209 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:03:25.413419  710209 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:03:25.624696  710209 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:03:25.826254  710209 docker.go:234] disabling docker service ...
	I1014 21:03:25.826321  710209 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:03:25.875807  710209 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:03:25.898509  710209 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:03:26.110122  710209 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:03:26.328744  710209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:03:26.343548  710209 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:03:26.369628  710209 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:03:26.369697  710209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:03:26.385673  710209 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 21:03:26.385745  710209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:03:26.398170  710209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:03:26.409427  710209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:03:26.428844  710209 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:03:26.442125  710209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:03:26.459241  710209 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:03:26.481646  710209 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:03:26.493618  710209 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:03:26.504381  710209 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:03:26.516871  710209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:03:26.715335  710209 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 21:03:26.935560  710209 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:03:26.935635  710209 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:03:26.949019  710209 start.go:563] Will wait 60s for crictl version
	I1014 21:03:26.949093  710209 ssh_runner.go:195] Run: which crictl
	I1014 21:03:26.953184  710209 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:03:26.998662  710209 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:03:26.998828  710209 ssh_runner.go:195] Run: crio --version
	I1014 21:03:27.043074  710209 ssh_runner.go:195] Run: crio --version
	I1014 21:03:27.102053  710209 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 21:03:27.105187  710209 cli_runner.go:164] Run: docker network inspect force-systemd-flag-162846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:03:27.130003  710209 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1014 21:03:27.134255  710209 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:03:27.146423  710209 kubeadm.go:883] updating cluster {Name:force-systemd-flag-162846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-162846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:03:27.146529  710209 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:03:27.146585  710209 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:03:27.215215  710209 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:03:27.215237  710209 crio.go:433] Images already preloaded, skipping extraction
	I1014 21:03:27.215293  710209 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:03:27.270672  710209 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:03:27.270745  710209 cache_images.go:85] Images are preloaded, skipping loading
	I1014 21:03:27.270769  710209 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1014 21:03:27.270897  710209 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-162846 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-162846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 21:03:27.271022  710209 ssh_runner.go:195] Run: crio config
	I1014 21:03:27.367907  710209 cni.go:84] Creating CNI manager for ""
	I1014 21:03:27.367930  710209 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:03:27.367949  710209 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 21:03:27.367972  710209 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-162846 NodeName:force-systemd-flag-162846 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:03:27.368104  710209 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-162846"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 21:03:27.368183  710209 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:03:27.382627  710209 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:03:27.382696  710209 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:03:27.397328  710209 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1014 21:03:27.420478  710209 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:03:27.442707  710209 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1014 21:03:27.456884  710209 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:03:27.461819  710209 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:03:27.471908  710209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:03:27.660885  710209 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:03:27.688860  710209 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846 for IP: 192.168.76.2
	I1014 21:03:27.688931  710209 certs.go:195] generating shared ca certs ...
	I1014 21:03:27.688962  710209 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:03:27.689162  710209 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:03:27.689250  710209 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:03:27.689278  710209 certs.go:257] generating profile certs ...
	I1014 21:03:27.689365  710209 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/client.key
	I1014 21:03:27.689415  710209 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/client.crt with IP's: []
	I1014 21:03:28.054354  710209 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/client.crt ...
	I1014 21:03:28.054428  710209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/client.crt: {Name:mkb1c480398c2ab8759d5c825d651761d79416d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:03:28.054664  710209 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/client.key ...
	I1014 21:03:28.054676  710209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/client.key: {Name:mk8f9f8bb1369459f2ad35657f28981a89321997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:03:28.054763  710209 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/apiserver.key.1a54e066
	I1014 21:03:28.054777  710209 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/apiserver.crt.1a54e066 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1014 21:03:28.250735  710209 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/apiserver.crt.1a54e066 ...
	I1014 21:03:28.250814  710209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/apiserver.crt.1a54e066: {Name:mk4365b7b712c24a3b04b9fd59f9f9b298de6c13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:03:28.251068  710209 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/apiserver.key.1a54e066 ...
	I1014 21:03:28.251109  710209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/apiserver.key.1a54e066: {Name:mk73bf01d623213acbec344b062b312f85819d4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:03:28.251270  710209 certs.go:382] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/apiserver.crt.1a54e066 -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/apiserver.crt
	I1014 21:03:28.251404  710209 certs.go:386] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/apiserver.key.1a54e066 -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/apiserver.key
	I1014 21:03:28.251533  710209 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/proxy-client.key
	I1014 21:03:28.251556  710209 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/proxy-client.crt with IP's: []
	I1014 21:03:28.391629  710209 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/proxy-client.crt ...
	I1014 21:03:28.391706  710209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/proxy-client.crt: {Name:mkd1670a94811ff7cf4c5c97cbfa25c32144d200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:03:28.391939  710209 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/proxy-client.key ...
	I1014 21:03:28.391975  710209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/proxy-client.key: {Name:mke1db5591d6478786e5c9c36175edddc4801042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
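
The three profile certs generated above (client, apiserver with its IP SANs, proxy-client) are ordinary CA-signed x509 certificates. A compact Go sketch of issuing one such cert with IP SANs, assuming an already-loaded CA pair; this is simplified relative to minikube's crypto.go and the function name is mine:

    package certsketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signedCert issues a server certificate for the given IP SANs
    // (e.g. 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2), signed by ca/caKey.
    func signedCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube"},
            IPAddresses:  ips,
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration:26280h0m0s above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil // DER-encoded certificate plus its private key
    }
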
	I1014 21:03:28.392116  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 21:03:28.392168  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 21:03:28.392198  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 21:03:28.392241  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 21:03:28.392279  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 21:03:28.392355  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 21:03:28.392377  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 21:03:28.392389  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 21:03:28.392441  710209 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:03:28.392477  710209 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:03:28.392485  710209 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:03:28.392509  710209 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:03:28.392533  710209 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:03:28.392555  710209 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:03:28.392595  710209 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:03:28.392622  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:03:28.392635  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem -> /usr/share/ca-certificates/546910.pem
	I1014 21:03:28.392649  710209 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /usr/share/ca-certificates/5469102.pem
	I1014 21:03:28.393171  710209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:03:28.429361  710209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:03:28.461885  710209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:03:28.486885  710209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:03:28.510818  710209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1014 21:03:28.560817  710209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 21:03:28.606424  710209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:03:28.679870  710209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-flag-162846/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 21:03:28.698568  710209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:03:28.718286  710209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:03:28.736503  710209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:03:28.754388  710209 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:03:28.785952  710209 ssh_runner.go:195] Run: openssl version
	I1014 21:03:28.797769  710209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:03:28.809480  710209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:03:28.814110  710209 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:03:28.814311  710209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:03:28.868619  710209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 21:03:28.877324  710209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:03:28.889168  710209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:03:28.896979  710209 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:03:28.897099  710209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:03:28.944069  710209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 21:03:28.952923  710209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:03:28.961581  710209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:03:28.969051  710209 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:03:28.969200  710209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:03:29.022925  710209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:03:29.039107  710209 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:03:29.043947  710209 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 21:03:29.043999  710209 kubeadm.go:400] StartCluster: {Name:force-systemd-flag-162846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-162846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:03:29.044067  710209 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:03:29.044136  710209 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:03:29.095405  710209 cri.go:89] found id: ""
	I1014 21:03:29.095476  710209 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:03:29.108418  710209 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 21:03:29.121547  710209 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 21:03:29.121663  710209 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 21:03:29.133588  710209 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 21:03:29.133660  710209 kubeadm.go:157] found existing configuration files:
	
	I1014 21:03:29.133752  710209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 21:03:29.144915  710209 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 21:03:29.145032  710209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 21:03:29.155659  710209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 21:03:29.169983  710209 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 21:03:29.170098  710209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 21:03:29.182517  710209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 21:03:29.195455  710209 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 21:03:29.195582  710209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 21:03:29.206625  710209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 21:03:29.215202  710209 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 21:03:29.215317  710209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 21:03:29.231457  710209 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 21:03:29.299757  710209 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 21:03:29.300688  710209 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 21:03:29.360717  710209 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 21:03:29.360880  710209 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1014 21:03:29.360956  710209 kubeadm.go:318] OS: Linux
	I1014 21:03:29.361068  710209 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 21:03:29.361153  710209 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1014 21:03:29.361241  710209 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 21:03:29.361343  710209 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 21:03:29.361432  710209 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 21:03:29.361522  710209 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 21:03:29.361640  710209 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 21:03:29.361732  710209 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 21:03:29.361811  710209 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1014 21:03:29.526684  710209 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 21:03:29.526868  710209 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 21:03:29.527009  710209 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 21:03:29.538166  710209 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 21:03:29.544645  710209 out.go:252]   - Generating certificates and keys ...
	I1014 21:03:29.544821  710209 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 21:03:29.544944  710209 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 21:03:30.471872  710209 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 21:03:32.034703  710209 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 21:03:32.247182  710209 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 21:03:32.609193  710209 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 21:03:33.924440  710209 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 21:03:33.924893  710209 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-162846 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1014 21:03:34.235236  710209 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 21:03:34.235689  710209 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-162846 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1014 21:03:34.502252  710209 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 21:03:35.264874  710209 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 21:03:37.444772  710209 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 21:03:37.444854  710209 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 21:03:38.578435  710209 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 21:03:39.574904  710209 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 21:03:40.038407  710209 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 21:03:40.536727  710209 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 21:03:41.302544  710209 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 21:03:41.303971  710209 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 21:03:41.315950  710209 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 21:03:41.320149  710209 out.go:252]   - Booting up control plane ...
	I1014 21:03:41.322108  710209 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 21:03:41.324016  710209 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 21:03:41.330155  710209 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 21:03:41.360587  710209 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 21:03:41.360697  710209 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 21:03:41.375350  710209 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 21:03:41.381082  710209 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 21:03:41.381373  710209 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 21:03:41.597870  710209 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 21:03:41.597995  710209 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 21:03:43.099279  710209 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501978896s
	I1014 21:03:43.103159  710209 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 21:03:43.103261  710209 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1014 21:03:43.103572  710209 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 21:03:43.103658  710209 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 21:07:43.103676  710209 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000439994s
	I1014 21:07:43.103826  710209 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000025391s
	I1014 21:07:43.104090  710209 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000999755s
	I1014 21:07:43.104245  710209 kubeadm.go:318] 
	I1014 21:07:43.104370  710209 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 21:07:43.104459  710209 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 21:07:43.104555  710209 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 21:07:43.104816  710209 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 21:07:43.104902  710209 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 21:07:43.104984  710209 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 21:07:43.104989  710209 kubeadm.go:318] 
	I1014 21:07:43.109030  710209 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1014 21:07:43.109290  710209 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1014 21:07:43.109404  710209 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 21:07:43.110028  710209 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	W1014 21:07:43.110236  710209 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-162846 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-162846 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501978896s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000439994s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000025391s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000999755s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-162846 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-162846 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501978896s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000439994s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000025391s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000999755s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1014 21:07:43.110315  710209 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 21:07:43.110647  710209 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 21:07:43.656042  710209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:07:43.670120  710209 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 21:07:43.670191  710209 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 21:07:43.678813  710209 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 21:07:43.678843  710209 kubeadm.go:157] found existing configuration files:
	
	I1014 21:07:43.678900  710209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 21:07:43.686947  710209 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 21:07:43.687009  710209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 21:07:43.694635  710209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 21:07:43.702645  710209 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 21:07:43.702709  710209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 21:07:43.710378  710209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 21:07:43.718472  710209 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 21:07:43.718535  710209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 21:07:43.725735  710209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 21:07:43.735047  710209 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 21:07:43.735110  710209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 21:07:43.742470  710209 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 21:07:43.794524  710209 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 21:07:43.794884  710209 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 21:07:43.821460  710209 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 21:07:43.821548  710209 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1014 21:07:43.821598  710209 kubeadm.go:318] OS: Linux
	I1014 21:07:43.821663  710209 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 21:07:43.821737  710209 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1014 21:07:43.821802  710209 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 21:07:43.821862  710209 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 21:07:43.821923  710209 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 21:07:43.821997  710209 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 21:07:43.822067  710209 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 21:07:43.822133  710209 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 21:07:43.822197  710209 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1014 21:07:43.889082  710209 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 21:07:43.889198  710209 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 21:07:43.889299  710209 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 21:07:43.896260  710209 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 21:07:43.901268  710209 out.go:252]   - Generating certificates and keys ...
	I1014 21:07:43.901398  710209 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 21:07:43.901499  710209 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 21:07:43.901599  710209 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 21:07:43.901696  710209 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 21:07:43.901787  710209 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 21:07:43.901858  710209 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 21:07:43.901951  710209 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 21:07:43.902036  710209 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 21:07:43.902146  710209 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 21:07:43.902238  710209 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 21:07:43.902288  710209 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 21:07:43.902355  710209 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 21:07:44.070649  710209 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 21:07:44.564811  710209 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 21:07:45.219514  710209 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 21:07:46.061857  710209 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 21:07:46.433987  710209 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 21:07:46.434680  710209 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 21:07:46.437611  710209 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 21:07:46.441901  710209 out.go:252]   - Booting up control plane ...
	I1014 21:07:46.442002  710209 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 21:07:46.442084  710209 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 21:07:46.443058  710209 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 21:07:46.462479  710209 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 21:07:46.462783  710209 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 21:07:46.471001  710209 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 21:07:46.471115  710209 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 21:07:46.471166  710209 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 21:07:46.604443  710209 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 21:07:46.604589  710209 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 21:07:47.605020  710209 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000938261s
	I1014 21:07:47.608731  710209 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 21:07:47.608862  710209 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1014 21:07:47.608997  710209 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 21:07:47.609097  710209 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 21:11:47.609755  710209 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000908624s
	I1014 21:11:47.610069  710209 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000449893s
	I1014 21:11:47.610188  710209 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001217756s
	I1014 21:11:47.610272  710209 kubeadm.go:318] 
	I1014 21:11:47.610393  710209 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 21:11:47.610506  710209 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 21:11:47.610603  710209 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 21:11:47.610766  710209 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 21:11:47.610846  710209 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 21:11:47.610956  710209 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 21:11:47.610962  710209 kubeadm.go:318] 
	I1014 21:11:47.615630  710209 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1014 21:11:47.615901  710209 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1014 21:11:47.616023  710209 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 21:11:47.616674  710209 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1014 21:11:47.616764  710209 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 21:11:47.616843  710209 kubeadm.go:402] duration metric: took 8m18.572847211s to StartCluster
	I1014 21:11:47.616897  710209 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:11:47.616970  710209 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:11:47.643132  710209 cri.go:89] found id: ""
	I1014 21:11:47.643168  710209 logs.go:282] 0 containers: []
	W1014 21:11:47.643177  710209 logs.go:284] No container was found matching "kube-apiserver"
	I1014 21:11:47.643184  710209 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:11:47.643242  710209 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:11:47.672697  710209 cri.go:89] found id: ""
	I1014 21:11:47.672726  710209 logs.go:282] 0 containers: []
	W1014 21:11:47.672735  710209 logs.go:284] No container was found matching "etcd"
	I1014 21:11:47.672742  710209 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:11:47.672809  710209 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:11:47.701458  710209 cri.go:89] found id: ""
	I1014 21:11:47.701484  710209 logs.go:282] 0 containers: []
	W1014 21:11:47.701493  710209 logs.go:284] No container was found matching "coredns"
	I1014 21:11:47.701500  710209 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:11:47.701582  710209 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:11:47.727270  710209 cri.go:89] found id: ""
	I1014 21:11:47.727300  710209 logs.go:282] 0 containers: []
	W1014 21:11:47.727309  710209 logs.go:284] No container was found matching "kube-scheduler"
	I1014 21:11:47.727333  710209 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:11:47.727416  710209 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:11:47.754977  710209 cri.go:89] found id: ""
	I1014 21:11:47.755057  710209 logs.go:282] 0 containers: []
	W1014 21:11:47.755081  710209 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:11:47.755102  710209 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:11:47.755187  710209 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:11:47.781560  710209 cri.go:89] found id: ""
	I1014 21:11:47.781581  710209 logs.go:282] 0 containers: []
	W1014 21:11:47.781590  710209 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 21:11:47.781596  710209 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:11:47.781656  710209 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:11:47.807275  710209 cri.go:89] found id: ""
	I1014 21:11:47.807299  710209 logs.go:282] 0 containers: []
	W1014 21:11:47.807307  710209 logs.go:284] No container was found matching "kindnet"
	I1014 21:11:47.807316  710209 logs.go:123] Gathering logs for kubelet ...
	I1014 21:11:47.807328  710209 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:11:47.894712  710209 logs.go:123] Gathering logs for dmesg ...
	I1014 21:11:47.894748  710209 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 21:11:47.913155  710209 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:11:47.913185  710209 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:11:48.010753  710209 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 21:11:47.999953    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:48.000815    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:48.002755    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:48.003450    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:48.005598    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 21:11:47.999953    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:48.000815    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:48.002755    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:48.003450    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:48.005598    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:11:48.010779  710209 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:11:48.010792  710209 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:11:48.088492  710209 logs.go:123] Gathering logs for container status ...
	I1014 21:11:48.088530  710209 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1014 21:11:48.120530  710209 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000938261s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000908624s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000449893s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001217756s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 21:11:48.120591  710209 out.go:285] * 
	* 
	W1014 21:11:48.120781  710209 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000938261s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000908624s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000449893s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001217756s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000938261s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000908624s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000449893s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001217756s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 21:11:48.120806  710209 out.go:285] * 
	* 
	W1014 21:11:48.123103  710209 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 21:11:48.130023  710209 out.go:203] 
	W1014 21:11:48.132817  710209 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000938261s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000908624s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000449893s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001217756s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000938261s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000908624s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000449893s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001217756s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 21:11:48.132855  710209 out.go:285] * 
	* 
	I1014 21:11:48.137900  710209 out.go:203] 

                                                
                                                
** /stderr **
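The kubeadm output above already names the triage path: list the CRI-O containers with crictl and read the logs of whichever control-plane container crashed. A minimal sketch of that loop, run from the host and assuming the kic node container force-systemd-flag-162846 is still running (the docker inspect below reports Status "running") and has crictl on its PATH:

	# list kube containers inside the node, using the exact command the kubeadm hint suggests
	docker exec force-systemd-flag-162846 crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# read the logs of a failing container, substituting an ID taken from the listing above
	docker exec force-systemd-flag-162846 crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID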
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-162846 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-162846 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-10-14 21:11:48.494398068 +0000 UTC m=+4465.912551807
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-flag-162846
helpers_test.go:243: (dbg) docker inspect force-systemd-flag-162846:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d8ff84f659d8fb019412f5121d6b73dc92d90f83efcd5a4a367a6598608696fa",
	        "Created": "2025-10-14T21:03:18.283260265Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 710763,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:03:18.344416282Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/d8ff84f659d8fb019412f5121d6b73dc92d90f83efcd5a4a367a6598608696fa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d8ff84f659d8fb019412f5121d6b73dc92d90f83efcd5a4a367a6598608696fa/hostname",
	        "HostsPath": "/var/lib/docker/containers/d8ff84f659d8fb019412f5121d6b73dc92d90f83efcd5a4a367a6598608696fa/hosts",
	        "LogPath": "/var/lib/docker/containers/d8ff84f659d8fb019412f5121d6b73dc92d90f83efcd5a4a367a6598608696fa/d8ff84f659d8fb019412f5121d6b73dc92d90f83efcd5a4a367a6598608696fa-json.log",
	        "Name": "/force-systemd-flag-162846",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-162846:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-162846",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d8ff84f659d8fb019412f5121d6b73dc92d90f83efcd5a4a367a6598608696fa",
	                "LowerDir": "/var/lib/docker/overlay2/d743544ea493856e7b865d1a407f31777122acf6fd4030e6b1d83895f3ebb382-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d743544ea493856e7b865d1a407f31777122acf6fd4030e6b1d83895f3ebb382/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d743544ea493856e7b865d1a407f31777122acf6fd4030e6b1d83895f3ebb382/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d743544ea493856e7b865d1a407f31777122acf6fd4030e6b1d83895f3ebb382/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-162846",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-162846/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-162846",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-162846",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-162846",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ebf07e81d564696b78c9948dbbd1c6269eca0601c70f3173857633695a70e1f3",
	            "SandboxKey": "/var/run/docker/netns/ebf07e81d564",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33762"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33763"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33766"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33764"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33765"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-162846": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:b3:82:b3:49:b5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5c65fc2ffd0d4e5aa8a8ae57321fd5e4cb9e42be42b8bdf947f328ab51cb1cc4",
	                    "EndpointID": "f1f1b37e9e5524ba25127891fd5968950c7d0c3dfd0d25f34db6a94bc995b2a2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-162846",
	                        "d8ff84f659d8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
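The full docker inspect dump above is kept for the record; when reading it by hand, the same data can be pulled out selectively with docker's --format templates. A small sketch using field paths visible in the JSON above:

	# container state and the host ports published for the node
	docker inspect -f '{{.State.Status}}' force-systemd-flag-162846
	docker inspect -f '{{json .NetworkSettings.Ports}}' force-systemd-flag-162846
	# the node IP on the profile network (192.168.76.2 in the output above)
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' force-systemd-flag-162846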
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-162846 -n force-systemd-flag-162846
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-162846 -n force-systemd-flag-162846: exit status 6 (306.257121ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 21:11:48.801324  719122 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-162846" does not appear in /home/jenkins/minikube-integration/21409-545047/kubeconfig

                                                
                                                
** /stderr **
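The status output above warns that kubectl points at a stale context, and the stderr shows why: the failed start never wrote an endpoint for force-systemd-flag-162846 into the shared kubeconfig. Had the cluster come up, the repair the warning itself suggests would be (sketch; with this failure the profile entry is simply missing, so there may be nothing to update):

	minikube update-context -p force-systemd-flag-162846
	kubectl config get-contexts   # confirm which context kubectl now points at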
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-162846 logs -n 25
helpers_test.go:260: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-810782 sudo systemctl cat kubelet --no-pager                                                     │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl status docker --all --full --no-pager                                      │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl cat docker --no-pager                                                      │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cat /etc/docker/daemon.json                                                          │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo docker system info                                                                   │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cri-dockerd --version                                                                │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl cat containerd --no-pager                                                  │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cat /etc/containerd/config.toml                                                      │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo containerd config dump                                                               │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl status crio --all --full --no-pager                                        │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl cat crio --no-pager                                                        │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo crio config                                                                          │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ delete  │ -p cilium-810782                                                                                           │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │ 14 Oct 25 21:04 UTC │
	│ start   │ -p force-systemd-env-347955 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-347955  │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ force-systemd-flag-162846 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                       │ force-systemd-flag-162846 │ jenkins │ v1.37.0 │ 14 Oct 25 21:11 UTC │ 14 Oct 25 21:11 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:04:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:04:23.925997  715133 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:04:23.926134  715133 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:04:23.926145  715133 out.go:374] Setting ErrFile to fd 2...
	I1014 21:04:23.926150  715133 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:04:23.926516  715133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:04:23.926984  715133 out.go:368] Setting JSON to false
	I1014 21:04:23.927883  715133 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13614,"bootTime":1760462250,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:04:23.927983  715133 start.go:141] virtualization:  
	I1014 21:04:23.931310  715133 out.go:179] * [force-systemd-env-347955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:04:23.935125  715133 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:04:23.935353  715133 notify.go:220] Checking for updates...
	I1014 21:04:23.941411  715133 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:04:23.944386  715133 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:04:23.947224  715133 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:04:23.950061  715133 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:04:23.953037  715133 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1014 21:04:23.956488  715133 config.go:182] Loaded profile config "force-systemd-flag-162846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:04:23.956638  715133 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:04:23.982633  715133 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:04:23.982778  715133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:04:24.042349  715133 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:04:24.032744423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:04:24.042465  715133 docker.go:318] overlay module found
	I1014 21:04:24.045597  715133 out.go:179] * Using the docker driver based on user configuration
	I1014 21:04:24.048456  715133 start.go:305] selected driver: docker
	I1014 21:04:24.048475  715133 start.go:925] validating driver "docker" against <nil>
	I1014 21:04:24.048490  715133 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:04:24.049258  715133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:04:24.112515  715133 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:04:24.103639047 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:04:24.112676  715133 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 21:04:24.112893  715133 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 21:04:24.115785  715133 out.go:179] * Using Docker driver with root privileges
	I1014 21:04:24.118668  715133 cni.go:84] Creating CNI manager for ""
	I1014 21:04:24.118742  715133 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:04:24.118755  715133 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 21:04:24.118836  715133 start.go:349] cluster config:
	{Name:force-systemd-env-347955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-347955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:04:24.122012  715133 out.go:179] * Starting "force-systemd-env-347955" primary control-plane node in "force-systemd-env-347955" cluster
	I1014 21:04:24.124919  715133 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:04:24.127835  715133 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:04:24.130783  715133 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:04:24.130855  715133 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 21:04:24.130868  715133 cache.go:58] Caching tarball of preloaded images
	I1014 21:04:24.130980  715133 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:04:24.131000  715133 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 21:04:24.131120  715133 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/config.json ...
	I1014 21:04:24.131144  715133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/config.json: {Name:mk930b382e067504a934d5eaa3f0bbe902c63660 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:04:24.131315  715133 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:04:24.150404  715133 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:04:24.150428  715133 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:04:24.150445  715133 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:04:24.150468  715133 start.go:360] acquireMachinesLock for force-systemd-env-347955: {Name:mk83900274819c051b23db1645ae97a266881daa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:04:24.150589  715133 start.go:364] duration metric: took 101.655µs to acquireMachinesLock for "force-systemd-env-347955"
	I1014 21:04:24.150627  715133 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-347955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-347955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:04:24.150700  715133 start.go:125] createHost starting for "" (driver="docker")
	I1014 21:04:24.154048  715133 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 21:04:24.154281  715133 start.go:159] libmachine.API.Create for "force-systemd-env-347955" (driver="docker")
	I1014 21:04:24.154329  715133 client.go:168] LocalClient.Create starting
	I1014 21:04:24.154416  715133 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem
	I1014 21:04:24.154455  715133 main.go:141] libmachine: Decoding PEM data...
	I1014 21:04:24.154472  715133 main.go:141] libmachine: Parsing certificate...
	I1014 21:04:24.154524  715133 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem
	I1014 21:04:24.154548  715133 main.go:141] libmachine: Decoding PEM data...
	I1014 21:04:24.154561  715133 main.go:141] libmachine: Parsing certificate...
	I1014 21:04:24.154940  715133 cli_runner.go:164] Run: docker network inspect force-systemd-env-347955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 21:04:24.171598  715133 cli_runner.go:211] docker network inspect force-systemd-env-347955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 21:04:24.171680  715133 network_create.go:284] running [docker network inspect force-systemd-env-347955] to gather additional debugging logs...
	I1014 21:04:24.171710  715133 cli_runner.go:164] Run: docker network inspect force-systemd-env-347955
	W1014 21:04:24.188546  715133 cli_runner.go:211] docker network inspect force-systemd-env-347955 returned with exit code 1
	I1014 21:04:24.188579  715133 network_create.go:287] error running [docker network inspect force-systemd-env-347955]: docker network inspect force-systemd-env-347955: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-347955 not found
	I1014 21:04:24.188594  715133 network_create.go:289] output of [docker network inspect force-systemd-env-347955]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-347955 not found
	
	** /stderr **
	I1014 21:04:24.188694  715133 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:04:24.205834  715133 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3483fd8ccce IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:16:f0:2b:be:5b} reservation:<nil>}
	I1014 21:04:24.206136  715133 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-92eecc9d5dcd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:46:39:04:de:cc:9d} reservation:<nil>}
	I1014 21:04:24.206452  715133 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6db3c426d67c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:2a:f4:c0:70:fe} reservation:<nil>}
	I1014 21:04:24.206789  715133 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5c65fc2ffd0d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:da:4c:5b:88:44:fa} reservation:<nil>}
	I1014 21:04:24.207236  715133 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d9dd0}
	I1014 21:04:24.207256  715133 network_create.go:124] attempt to create docker network force-systemd-env-347955 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1014 21:04:24.207313  715133 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-347955 force-systemd-env-347955
	I1014 21:04:24.264122  715133 network_create.go:108] docker network force-systemd-env-347955 192.168.85.0/24 created
	I1014 21:04:24.264151  715133 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-347955" container
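Note on the network step above: the scan walks candidate private /24 subnets (192.168.49.0/24, .58, .67, .76) and picks the first one no existing Docker bridge occupies, here 192.168.85.0/24, from which the static node IP 192.168.85.2 is derived. A simplified, hypothetical Go sketch of that scan (not minikube's actual network package) for orientation:

package main

import "fmt"

// firstFreeSubnet steps through the same /24 candidates seen in the log
// (49, 58, 67, 76, 85, ...) and returns the first one not already taken.
// Purely illustrative; the real check inspects host bridge interfaces.
func firstFreeSubnet(taken map[string]bool) (string, bool) {
	for third := 49; third <= 255; third += 9 {
		candidate := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[candidate] {
			return candidate, true
		}
	}
	return "", false
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // br-b3483fd8ccce
		"192.168.58.0/24": true, // br-92eecc9d5dcd
		"192.168.67.0/24": true, // br-6db3c426d67c
		"192.168.76.0/24": true, // br-5c65fc2ffd0d
	}
	if subnet, ok := firstFreeSubnet(taken); ok {
		fmt.Println("using free private subnet", subnet) // prints 192.168.85.0/24
	}
}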
	I1014 21:04:24.264246  715133 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 21:04:24.280701  715133 cli_runner.go:164] Run: docker volume create force-systemd-env-347955 --label name.minikube.sigs.k8s.io=force-systemd-env-347955 --label created_by.minikube.sigs.k8s.io=true
	I1014 21:04:24.298534  715133 oci.go:103] Successfully created a docker volume force-systemd-env-347955
	I1014 21:04:24.298629  715133 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-347955-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-347955 --entrypoint /usr/bin/test -v force-systemd-env-347955:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 21:04:24.866435  715133 oci.go:107] Successfully prepared a docker volume force-systemd-env-347955
	I1014 21:04:24.866483  715133 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:04:24.866519  715133 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 21:04:24.866587  715133 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-347955:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 21:04:29.254306  715133 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-347955:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.387660998s)
	I1014 21:04:29.254340  715133 kic.go:203] duration metric: took 4.387833719s to extract preloaded images to volume ...
	W1014 21:04:29.254485  715133 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1014 21:04:29.254591  715133 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 21:04:29.306489  715133 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-347955 --name force-systemd-env-347955 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-347955 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-347955 --network force-systemd-env-347955 --ip 192.168.85.2 --volume force-systemd-env-347955:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 21:04:29.657033  715133 cli_runner.go:164] Run: docker container inspect force-systemd-env-347955 --format={{.State.Running}}
	I1014 21:04:29.680613  715133 cli_runner.go:164] Run: docker container inspect force-systemd-env-347955 --format={{.State.Status}}
	I1014 21:04:29.704604  715133 cli_runner.go:164] Run: docker exec force-systemd-env-347955 stat /var/lib/dpkg/alternatives/iptables
	I1014 21:04:29.756505  715133 oci.go:144] the created container "force-systemd-env-347955" has a running status.
	I1014 21:04:29.756533  715133 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-env-347955/id_rsa...
	I1014 21:04:29.969659  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-env-347955/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1014 21:04:29.969708  715133 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-env-347955/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 21:04:29.992619  715133 cli_runner.go:164] Run: docker container inspect force-systemd-env-347955 --format={{.State.Status}}
	I1014 21:04:30.018777  715133 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 21:04:30.018800  715133 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-347955 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 21:04:30.104773  715133 cli_runner.go:164] Run: docker container inspect force-systemd-env-347955 --format={{.State.Status}}
	I1014 21:04:30.131695  715133 machine.go:93] provisionDockerMachine start ...
	I1014 21:04:30.131801  715133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347955
	I1014 21:04:30.161396  715133 main.go:141] libmachine: Using SSH client type: native
	I1014 21:04:30.161751  715133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33767 <nil> <nil>}
	I1014 21:04:30.161762  715133 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:04:30.162359  715133 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36080->127.0.0.1:33767: read: connection reset by peer
	I1014 21:04:33.325004  715133 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-347955
	
	I1014 21:04:33.325053  715133 ubuntu.go:182] provisioning hostname "force-systemd-env-347955"
	I1014 21:04:33.325164  715133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347955
	I1014 21:04:33.342467  715133 main.go:141] libmachine: Using SSH client type: native
	I1014 21:04:33.342786  715133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33767 <nil> <nil>}
	I1014 21:04:33.342804  715133 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-347955 && echo "force-systemd-env-347955" | sudo tee /etc/hostname
	I1014 21:04:33.497211  715133 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-347955
	
	I1014 21:04:33.497297  715133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347955
	I1014 21:04:33.514955  715133 main.go:141] libmachine: Using SSH client type: native
	I1014 21:04:33.515276  715133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33767 <nil> <nil>}
	I1014 21:04:33.515298  715133 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-347955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-347955/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-347955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:04:33.661185  715133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 21:04:33.661214  715133 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:04:33.661247  715133 ubuntu.go:190] setting up certificates
	I1014 21:04:33.661257  715133 provision.go:84] configureAuth start
	I1014 21:04:33.661340  715133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-347955
	I1014 21:04:33.684264  715133 provision.go:143] copyHostCerts
	I1014 21:04:33.684329  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:04:33.684371  715133 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:04:33.684384  715133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:04:33.684459  715133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:04:33.684544  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:04:33.684568  715133 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:04:33.684577  715133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:04:33.684605  715133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:04:33.684647  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:04:33.684666  715133 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:04:33.684674  715133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:04:33.684697  715133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:04:33.684746  715133 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-347955 san=[127.0.0.1 192.168.85.2 force-systemd-env-347955 localhost minikube]
	I1014 21:04:34.390686  715133 provision.go:177] copyRemoteCerts
	I1014 21:04:34.390777  715133 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:04:34.390833  715133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347955
	I1014 21:04:34.407284  715133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33767 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-env-347955/id_rsa Username:docker}
	I1014 21:04:34.511935  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 21:04:34.511990  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:04:34.529156  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 21:04:34.529218  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1014 21:04:34.546863  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 21:04:34.546939  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 21:04:34.564667  715133 provision.go:87] duration metric: took 903.386257ms to configureAuth
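The configureAuth step above copies the host CA material and then generates a server certificate whose SANs match the log line (127.0.0.1, 192.168.85.2, force-systemd-env-347955, localhost, minikube). As a rough illustration of the mechanism only, and not minikube's provision code, a self-signed equivalent using Go's crypto/x509 could look like this:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate. The real step signs with the CA under
	// ~/.minikube/certs; this sketch self-signs purely for illustration.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-env-347955"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration:26280h0m0s in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log: san=[127.0.0.1 192.168.85.2 force-systemd-env-347955 localhost minikube]
		DNSNames:    []string{"force-systemd-env-347955", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}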
	I1014 21:04:34.564697  715133 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:04:34.564881  715133 config.go:182] Loaded profile config "force-systemd-env-347955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:04:34.565003  715133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347955
	I1014 21:04:34.582097  715133 main.go:141] libmachine: Using SSH client type: native
	I1014 21:04:34.582406  715133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33767 <nil> <nil>}
	I1014 21:04:34.582425  715133 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:04:34.830671  715133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:04:34.830693  715133 machine.go:96] duration metric: took 4.698976956s to provisionDockerMachine
	I1014 21:04:34.830703  715133 client.go:171] duration metric: took 10.67636263s to LocalClient.Create
	I1014 21:04:34.830724  715133 start.go:167] duration metric: took 10.676443829s to libmachine.API.Create "force-systemd-env-347955"
	I1014 21:04:34.830733  715133 start.go:293] postStartSetup for "force-systemd-env-347955" (driver="docker")
	I1014 21:04:34.830742  715133 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:04:34.830804  715133 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:04:34.830842  715133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347955
	I1014 21:04:34.849240  715133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33767 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-env-347955/id_rsa Username:docker}
	I1014 21:04:34.954210  715133 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:04:34.957954  715133 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:04:34.957987  715133 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:04:34.957999  715133 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:04:34.958063  715133 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:04:34.958148  715133 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:04:34.958160  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /etc/ssl/certs/5469102.pem
	I1014 21:04:34.958276  715133 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:04:34.965883  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:04:34.983714  715133 start.go:296] duration metric: took 152.966774ms for postStartSetup
	I1014 21:04:34.984088  715133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-347955
	I1014 21:04:35.001755  715133 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/config.json ...
	I1014 21:04:35.002100  715133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:04:35.002157  715133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347955
	I1014 21:04:35.021758  715133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33767 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-env-347955/id_rsa Username:docker}
	I1014 21:04:35.121668  715133 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:04:35.126863  715133 start.go:128] duration metric: took 10.976147826s to createHost
	I1014 21:04:35.126892  715133 start.go:83] releasing machines lock for "force-systemd-env-347955", held for 10.976288783s
	I1014 21:04:35.126973  715133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-347955
	I1014 21:04:35.144722  715133 ssh_runner.go:195] Run: cat /version.json
	I1014 21:04:35.144792  715133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347955
	I1014 21:04:35.145044  715133 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:04:35.145112  715133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347955
	I1014 21:04:35.162942  715133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33767 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-env-347955/id_rsa Username:docker}
	I1014 21:04:35.177347  715133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33767 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-env-347955/id_rsa Username:docker}
	I1014 21:04:35.267998  715133 ssh_runner.go:195] Run: systemctl --version
	I1014 21:04:35.357856  715133 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:04:35.396151  715133 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:04:35.400974  715133 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:04:35.401043  715133 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:04:35.429925  715133 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1014 21:04:35.429951  715133 start.go:495] detecting cgroup driver to use...
	I1014 21:04:35.429968  715133 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1014 21:04:35.430022  715133 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:04:35.448694  715133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:04:35.461972  715133 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:04:35.462064  715133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:04:35.478655  715133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:04:35.497579  715133 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:04:35.620703  715133 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:04:35.752096  715133 docker.go:234] disabling docker service ...
	I1014 21:04:35.752163  715133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:04:35.795071  715133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:04:35.812904  715133 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:04:35.945893  715133 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:04:36.061606  715133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:04:36.076130  715133 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:04:36.091652  715133 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:04:36.091763  715133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:04:36.101319  715133 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 21:04:36.101444  715133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:04:36.110734  715133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:04:36.119741  715133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:04:36.129736  715133 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:04:36.138242  715133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:04:36.147132  715133 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:04:36.160913  715133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:04:36.169871  715133 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:04:36.177537  715133 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:04:36.185128  715133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:04:36.293184  715133 ssh_runner.go:195] Run: sudo systemctl restart crio
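The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.10.1 pause image and the systemd cgroup manager with conmon_cgroup set to "pod", which is the point of the force-systemd scenario. A toy Go stand-in for those substitutions, shown only to make the edits explicit; the real run shells out to sed over SSH and the sample input below is an assumption:

package main

import (
	"fmt"
	"regexp"
)

// forceSystemd mimics the in-place edits from the log: pin the pause image
// and switch the cgroup manager to systemd, adding conmon_cgroup = "pod".
func forceSystemd(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, "cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	sample := "pause_image = \"registry.k8s.io/pause:3.10\"\ncgroup_manager = \"cgroupfs\"\n"
	fmt.Print(forceSystemd(sample))
}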
	I1014 21:04:36.423477  715133 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:04:36.423618  715133 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:04:36.427682  715133 start.go:563] Will wait 60s for crictl version
	I1014 21:04:36.427754  715133 ssh_runner.go:195] Run: which crictl
	I1014 21:04:36.431575  715133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:04:36.456901  715133 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:04:36.456986  715133 ssh_runner.go:195] Run: crio --version
	I1014 21:04:36.488421  715133 ssh_runner.go:195] Run: crio --version
	I1014 21:04:36.519433  715133 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 21:04:36.522157  715133 cli_runner.go:164] Run: docker network inspect force-systemd-env-347955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:04:36.538814  715133 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1014 21:04:36.542791  715133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:04:36.552500  715133 kubeadm.go:883] updating cluster {Name:force-systemd-env-347955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-347955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:04:36.552612  715133 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:04:36.552670  715133 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:04:36.585953  715133 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:04:36.585975  715133 crio.go:433] Images already preloaded, skipping extraction
	I1014 21:04:36.586033  715133 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:04:36.615167  715133 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:04:36.615188  715133 cache_images.go:85] Images are preloaded, skipping loading
	I1014 21:04:36.615195  715133 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1014 21:04:36.615283  715133 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-347955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-347955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 21:04:36.615362  715133 ssh_runner.go:195] Run: crio config
	I1014 21:04:36.689628  715133 cni.go:84] Creating CNI manager for ""
	I1014 21:04:36.689650  715133 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:04:36.689669  715133 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 21:04:36.689692  715133 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-347955 NodeName:force-systemd-env-347955 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:04:36.689831  715133 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-347955"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 21:04:36.689910  715133 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:04:36.698146  715133 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:04:36.698237  715133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:04:36.705932  715133 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1014 21:04:36.719304  715133 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:04:36.732900  715133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1014 21:04:36.746060  715133 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:04:36.749766  715133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
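Both /etc/hosts updates above (host.minikube.internal at 21:04:36.542 and control-plane.minikube.internal here) use the same pattern: strip any existing line for the name, append a fresh "IP<TAB>name" entry, and copy the result back with sudo. A small Go equivalent of that pattern, hypothetical and local-only rather than run over SSH; the file path in main is an assumption:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any line ending in "\t<name>" and appends
// "<ip>\t<name>", mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("hosts.example", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}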
	I1014 21:04:36.759484  715133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:04:36.881905  715133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:04:36.898363  715133 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955 for IP: 192.168.85.2
	I1014 21:04:36.898429  715133 certs.go:195] generating shared ca certs ...
	I1014 21:04:36.898467  715133 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:04:36.898639  715133 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:04:36.898712  715133 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:04:36.898751  715133 certs.go:257] generating profile certs ...
	I1014 21:04:36.898832  715133 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/client.key
	I1014 21:04:36.898869  715133 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/client.crt with IP's: []
	I1014 21:04:37.241019  715133 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/client.crt ...
	I1014 21:04:37.241054  715133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/client.crt: {Name:mkc06be81668f24c404438108816b1c475f15366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:04:37.241260  715133 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/client.key ...
	I1014 21:04:37.241275  715133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/client.key: {Name:mk922553fe25eb2c06611c488df204aba4cd3b44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:04:37.241377  715133 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.key.32e408ba
	I1014 21:04:37.241397  715133 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.crt.32e408ba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1014 21:04:37.424345  715133 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.crt.32e408ba ...
	I1014 21:04:37.424375  715133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.crt.32e408ba: {Name:mk6371f088862bcbe09cf9fb59da3a2d83b274d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:04:37.424561  715133 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.key.32e408ba ...
	I1014 21:04:37.424576  715133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.key.32e408ba: {Name:mk0316c1f89eff3d75875036e8bb28dace7ec40a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:04:37.424662  715133 certs.go:382] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.crt.32e408ba -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.crt
	I1014 21:04:37.424747  715133 certs.go:386] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.key.32e408ba -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.key
	I1014 21:04:37.424808  715133 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.key
	I1014 21:04:37.424826  715133 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.crt with IP's: []
	I1014 21:04:37.936954  715133 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.crt ...
	I1014 21:04:37.936987  715133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.crt: {Name:mka78d1eceb1edeb04c372f6643fa742b67d5787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:04:37.937170  715133 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.key ...
	I1014 21:04:37.937184  715133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.key: {Name:mkc87ee9795d29ad6e431b66fc6016ff292df6a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:04:37.937267  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 21:04:37.937289  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 21:04:37.937303  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 21:04:37.937322  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 21:04:37.937342  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 21:04:37.937354  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 21:04:37.937373  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 21:04:37.937384  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 21:04:37.937444  715133 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:04:37.937492  715133 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:04:37.937505  715133 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:04:37.937530  715133 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:04:37.937557  715133 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:04:37.937583  715133 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:04:37.937632  715133 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:04:37.937662  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:04:37.937679  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem -> /usr/share/ca-certificates/546910.pem
	I1014 21:04:37.937691  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /usr/share/ca-certificates/5469102.pem
	I1014 21:04:37.938290  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:04:37.957095  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:04:37.976185  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:04:37.994131  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:04:38.019056  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1014 21:04:38.039425  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 21:04:38.059547  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:04:38.079536  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 21:04:38.099856  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:04:38.119952  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:04:38.139213  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:04:38.157975  715133 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:04:38.171318  715133 ssh_runner.go:195] Run: openssl version
	I1014 21:04:38.177797  715133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:04:38.186991  715133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:04:38.191027  715133 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:04:38.191098  715133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:04:38.232410  715133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 21:04:38.241190  715133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:04:38.249792  715133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:04:38.253856  715133 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:04:38.253926  715133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:04:38.300234  715133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:04:38.308870  715133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:04:38.317772  715133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:04:38.321785  715133 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:04:38.321853  715133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:04:38.363445  715133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 21:04:38.372017  715133 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:04:38.375579  715133 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 21:04:38.375634  715133 kubeadm.go:400] StartCluster: {Name:force-systemd-env-347955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-347955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:04:38.375714  715133 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:04:38.375775  715133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:04:38.404197  715133 cri.go:89] found id: ""
	I1014 21:04:38.404270  715133 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:04:38.412598  715133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 21:04:38.420719  715133 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 21:04:38.420826  715133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 21:04:38.429151  715133 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 21:04:38.429174  715133 kubeadm.go:157] found existing configuration files:
	
	I1014 21:04:38.429230  715133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 21:04:38.437397  715133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 21:04:38.437496  715133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 21:04:38.445514  715133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 21:04:38.453560  715133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 21:04:38.453627  715133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 21:04:38.461524  715133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 21:04:38.469409  715133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 21:04:38.469518  715133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 21:04:38.476879  715133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 21:04:38.484772  715133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 21:04:38.484849  715133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
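The stale-config cleanup traced above boils down to: grep each kubeconfig for the minikube control-plane endpoint and delete the file when the check fails. A minimal shell sketch of that sequence, using the same paths seen in the log (illustrative only, not the actual minikube implementation, which issues each command over ssh_runner):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # delete the file if it does not reference the control-plane endpoint (or does not exist)
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done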
	I1014 21:04:38.492125  715133 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 21:04:38.569175  715133 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1014 21:04:38.569419  715133 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1014 21:04:38.645760  715133 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 21:07:43.103676  710209 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000439994s
	I1014 21:07:43.103826  710209 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000025391s
	I1014 21:07:43.104090  710209 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000999755s
	I1014 21:07:43.104245  710209 kubeadm.go:318] 
	I1014 21:07:43.104370  710209 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 21:07:43.104459  710209 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 21:07:43.104555  710209 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 21:07:43.104816  710209 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 21:07:43.104902  710209 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 21:07:43.104984  710209 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 21:07:43.104989  710209 kubeadm.go:318] 
	I1014 21:07:43.109030  710209 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1014 21:07:43.109290  710209 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1014 21:07:43.109404  710209 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 21:07:43.110028  710209 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	W1014 21:07:43.110236  710209 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-162846 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-162846 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501978896s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000439994s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000025391s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000999755s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
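As the failure text suggests, the next troubleshooting step is to list the control-plane containers with crictl and read the failing container's logs; a sketch using the CRI-O socket named in the message (CONTAINERID is a placeholder for an ID taken from the ps output):

	# list Kubernetes containers managed by CRI-O, excluding pause containers
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of the failing container
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID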
	
	I1014 21:07:43.110315  710209 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 21:07:43.110647  710209 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 21:07:43.656042  710209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
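The preflight warning about the kubelet service recurs on every attempt; addressing it as the check suggests would look roughly like this (run inside the minikube node, assuming systemd manages the kubelet unit):

	# enable kubelet so systemd starts it on boot, as the Service-Kubelet warning recommends
	sudo systemctl enable kubelet.service
	# confirm the unit is active
	sudo systemctl is-active kubelet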
	I1014 21:07:43.670120  710209 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 21:07:43.670191  710209 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 21:07:43.678813  710209 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 21:07:43.678843  710209 kubeadm.go:157] found existing configuration files:
	
	I1014 21:07:43.678900  710209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 21:07:43.686947  710209 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 21:07:43.687009  710209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 21:07:43.694635  710209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 21:07:43.702645  710209 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 21:07:43.702709  710209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 21:07:43.710378  710209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 21:07:43.718472  710209 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 21:07:43.718535  710209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 21:07:43.725735  710209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 21:07:43.735047  710209 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 21:07:43.735110  710209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 21:07:43.742470  710209 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 21:07:43.794524  710209 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 21:07:43.794884  710209 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 21:07:43.821460  710209 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 21:07:43.821548  710209 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1014 21:07:43.821598  710209 kubeadm.go:318] OS: Linux
	I1014 21:07:43.821663  710209 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 21:07:43.821737  710209 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1014 21:07:43.821802  710209 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 21:07:43.821862  710209 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 21:07:43.821923  710209 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 21:07:43.821997  710209 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 21:07:43.822067  710209 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 21:07:43.822133  710209 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 21:07:43.822197  710209 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1014 21:07:43.889082  710209 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 21:07:43.889198  710209 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 21:07:43.889299  710209 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 21:07:43.896260  710209 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 21:07:43.901268  710209 out.go:252]   - Generating certificates and keys ...
	I1014 21:07:43.901398  710209 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 21:07:43.901499  710209 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 21:07:43.901599  710209 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 21:07:43.901696  710209 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 21:07:43.901787  710209 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 21:07:43.901858  710209 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 21:07:43.901951  710209 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 21:07:43.902036  710209 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 21:07:43.902146  710209 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 21:07:43.902238  710209 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 21:07:43.902288  710209 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 21:07:43.902355  710209 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 21:07:44.070649  710209 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 21:07:44.564811  710209 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 21:07:45.219514  710209 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 21:07:46.061857  710209 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 21:07:46.433987  710209 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 21:07:46.434680  710209 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 21:07:46.437611  710209 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 21:07:46.441901  710209 out.go:252]   - Booting up control plane ...
	I1014 21:07:46.442002  710209 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 21:07:46.442084  710209 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 21:07:46.443058  710209 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 21:07:46.462479  710209 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 21:07:46.462783  710209 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 21:07:46.471001  710209 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 21:07:46.471115  710209 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 21:07:46.471166  710209 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 21:07:46.604443  710209 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 21:07:46.604589  710209 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 21:07:47.605020  710209 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000938261s
	I1014 21:07:47.608731  710209 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 21:07:47.608862  710209 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1014 21:07:47.608997  710209 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 21:07:47.609097  710209 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 21:08:47.972269  715133 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 21:08:47.972392  715133 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 21:08:47.975912  715133 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 21:08:47.975972  715133 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 21:08:47.976119  715133 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 21:08:47.976204  715133 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1014 21:08:47.976253  715133 kubeadm.go:318] OS: Linux
	I1014 21:08:47.976354  715133 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 21:08:47.976432  715133 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1014 21:08:47.976487  715133 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 21:08:47.976545  715133 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 21:08:47.976599  715133 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 21:08:47.976672  715133 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 21:08:47.976724  715133 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 21:08:47.976782  715133 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 21:08:47.976840  715133 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1014 21:08:47.976922  715133 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 21:08:47.977031  715133 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 21:08:47.977128  715133 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 21:08:47.977201  715133 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 21:08:47.981297  715133 out.go:252]   - Generating certificates and keys ...
	I1014 21:08:47.981409  715133 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 21:08:47.981486  715133 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 21:08:47.981564  715133 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 21:08:47.981631  715133 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 21:08:47.981709  715133 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 21:08:47.981771  715133 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 21:08:47.981837  715133 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 21:08:47.981986  715133 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-347955 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1014 21:08:47.982048  715133 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 21:08:47.982191  715133 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-347955 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1014 21:08:47.982268  715133 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 21:08:47.982342  715133 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 21:08:47.982396  715133 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 21:08:47.982461  715133 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 21:08:47.982521  715133 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 21:08:47.982588  715133 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 21:08:47.982652  715133 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 21:08:47.982726  715133 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 21:08:47.982802  715133 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 21:08:47.982898  715133 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 21:08:47.982975  715133 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 21:08:47.985888  715133 out.go:252]   - Booting up control plane ...
	I1014 21:08:47.986020  715133 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 21:08:47.986118  715133 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 21:08:47.986200  715133 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 21:08:47.986328  715133 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 21:08:47.986442  715133 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 21:08:47.986555  715133 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 21:08:47.986647  715133 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 21:08:47.986695  715133 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 21:08:47.986866  715133 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 21:08:47.987025  715133 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 21:08:47.987104  715133 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.50184272s
	I1014 21:08:47.987213  715133 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 21:08:47.987317  715133 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1014 21:08:47.987430  715133 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 21:08:47.987524  715133 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 21:08:47.987607  715133 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000155897s
	I1014 21:08:47.987689  715133 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000452065s
	I1014 21:08:47.987782  715133 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000190219s
	I1014 21:08:47.987792  715133 kubeadm.go:318] 
	I1014 21:08:47.987887  715133 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 21:08:47.987976  715133 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 21:08:47.988078  715133 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 21:08:47.988180  715133 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 21:08:47.988261  715133 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 21:08:47.988361  715133 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	W1014 21:08:47.988493  715133 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-347955 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-347955 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50184272s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000155897s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000452065s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000190219s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
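The control-plane checks above poll three health endpoints until the 4m0s deadline expires; they can be probed by hand from the node with curl (a sketch; -k skips TLS verification against the cluster's certificate, and the addresses are the ones reported in the log):

	curl -sk https://192.168.85.2:8443/livez; echo      # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz; echo      # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez; echo        # kube-scheduler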
	
	I1014 21:08:47.988583  715133 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 21:08:47.988845  715133 kubeadm.go:318] 
	I1014 21:08:48.545522  715133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:08:48.559072  715133 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 21:08:48.559143  715133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 21:08:48.567048  715133 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 21:08:48.567068  715133 kubeadm.go:157] found existing configuration files:
	
	I1014 21:08:48.567121  715133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 21:08:48.574764  715133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 21:08:48.574847  715133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 21:08:48.582554  715133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 21:08:48.590361  715133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 21:08:48.590449  715133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 21:08:48.597910  715133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 21:08:48.605579  715133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 21:08:48.605643  715133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 21:08:48.613326  715133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 21:08:48.621004  715133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 21:08:48.621114  715133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 21:08:48.629121  715133 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 21:08:48.671322  715133 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 21:08:48.671386  715133 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 21:08:48.694184  715133 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 21:08:48.694310  715133 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1014 21:08:48.694378  715133 kubeadm.go:318] OS: Linux
	I1014 21:08:48.694451  715133 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 21:08:48.694527  715133 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1014 21:08:48.694602  715133 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 21:08:48.694680  715133 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 21:08:48.694755  715133 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 21:08:48.694861  715133 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 21:08:48.694940  715133 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 21:08:48.695023  715133 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 21:08:48.695104  715133 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1014 21:08:48.766821  715133 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 21:08:48.766941  715133 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 21:08:48.767070  715133 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 21:08:48.780828  715133 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 21:08:48.787567  715133 out.go:252]   - Generating certificates and keys ...
	I1014 21:08:48.787732  715133 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 21:08:48.787839  715133 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 21:08:48.787962  715133 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 21:08:48.788061  715133 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 21:08:48.788178  715133 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 21:08:48.788269  715133 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 21:08:48.788388  715133 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 21:08:48.788502  715133 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 21:08:48.788612  715133 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 21:08:48.788738  715133 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 21:08:48.788810  715133 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 21:08:48.788905  715133 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 21:08:49.061981  715133 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 21:08:49.303765  715133 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 21:08:49.518251  715133 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 21:08:49.957458  715133 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 21:08:50.488659  715133 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 21:08:50.489275  715133 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 21:08:50.492880  715133 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 21:08:50.495976  715133 out.go:252]   - Booting up control plane ...
	I1014 21:08:50.496093  715133 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 21:08:50.496177  715133 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 21:08:50.496939  715133 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 21:08:50.512756  715133 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 21:08:50.512945  715133 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 21:08:50.521230  715133 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 21:08:50.521507  715133 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 21:08:50.521784  715133 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 21:08:50.655527  715133 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 21:08:50.655651  715133 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 21:08:52.156909  715133 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.50149622s
	I1014 21:08:52.160621  715133 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 21:08:52.160949  715133 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1014 21:08:52.161228  715133 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 21:08:52.161326  715133 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 21:11:47.609755  710209 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000908624s
	I1014 21:11:47.610069  710209 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000449893s
	I1014 21:11:47.610188  710209 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001217756s
	I1014 21:11:47.610272  710209 kubeadm.go:318] 
	I1014 21:11:47.610393  710209 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 21:11:47.610506  710209 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 21:11:47.610603  710209 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 21:11:47.610766  710209 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 21:11:47.610846  710209 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 21:11:47.610956  710209 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 21:11:47.610962  710209 kubeadm.go:318] 
	I1014 21:11:47.615630  710209 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1014 21:11:47.615901  710209 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1014 21:11:47.616023  710209 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 21:11:47.616674  710209 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1014 21:11:47.616764  710209 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 21:11:47.616843  710209 kubeadm.go:402] duration metric: took 8m18.572847211s to StartCluster
	I1014 21:11:47.616897  710209 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:11:47.616970  710209 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:11:47.643132  710209 cri.go:89] found id: ""
	I1014 21:11:47.643168  710209 logs.go:282] 0 containers: []
	W1014 21:11:47.643177  710209 logs.go:284] No container was found matching "kube-apiserver"
	I1014 21:11:47.643184  710209 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:11:47.643242  710209 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:11:47.672697  710209 cri.go:89] found id: ""
	I1014 21:11:47.672726  710209 logs.go:282] 0 containers: []
	W1014 21:11:47.672735  710209 logs.go:284] No container was found matching "etcd"
	I1014 21:11:47.672742  710209 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:11:47.672809  710209 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:11:47.701458  710209 cri.go:89] found id: ""
	I1014 21:11:47.701484  710209 logs.go:282] 0 containers: []
	W1014 21:11:47.701493  710209 logs.go:284] No container was found matching "coredns"
	I1014 21:11:47.701500  710209 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:11:47.701582  710209 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:11:47.727270  710209 cri.go:89] found id: ""
	I1014 21:11:47.727300  710209 logs.go:282] 0 containers: []
	W1014 21:11:47.727309  710209 logs.go:284] No container was found matching "kube-scheduler"
	I1014 21:11:47.727333  710209 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:11:47.727416  710209 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:11:47.754977  710209 cri.go:89] found id: ""
	I1014 21:11:47.755057  710209 logs.go:282] 0 containers: []
	W1014 21:11:47.755081  710209 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:11:47.755102  710209 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:11:47.755187  710209 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:11:47.781560  710209 cri.go:89] found id: ""
	I1014 21:11:47.781581  710209 logs.go:282] 0 containers: []
	W1014 21:11:47.781590  710209 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 21:11:47.781596  710209 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:11:47.781656  710209 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:11:47.807275  710209 cri.go:89] found id: ""
	I1014 21:11:47.807299  710209 logs.go:282] 0 containers: []
	W1014 21:11:47.807307  710209 logs.go:284] No container was found matching "kindnet"
	I1014 21:11:47.807316  710209 logs.go:123] Gathering logs for kubelet ...
	I1014 21:11:47.807328  710209 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:11:47.894712  710209 logs.go:123] Gathering logs for dmesg ...
	I1014 21:11:47.894748  710209 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 21:11:47.913155  710209 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:11:47.913185  710209 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:11:48.010753  710209 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 21:11:47.999953    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:48.000815    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:48.002755    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:48.003450    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:48.005598    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 21:11:47.999953    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:48.000815    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:48.002755    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:48.003450    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:48.005598    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:11:48.010779  710209 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:11:48.010792  710209 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:11:48.088492  710209 logs.go:123] Gathering logs for container status ...
	I1014 21:11:48.088530  710209 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
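The log-gathering pass above can be reproduced manually on the node; a sketch of the same diagnostics, with flags copied from the commands in the log:

	sudo journalctl -u kubelet -n 400                                            # kubelet logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400      # kernel warnings and errors
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400                                               # CRI-O runtime logs
	sudo crictl ps -a                                                            # container status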
	W1014 21:11:48.120530  710209 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000938261s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000908624s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000449893s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001217756s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 21:11:48.120591  710209 out.go:285] * 
	W1014 21:11:48.120781  710209 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000938261s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000908624s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000449893s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001217756s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 21:11:48.120806  710209 out.go:285] * 
	W1014 21:11:48.123103  710209 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 21:11:48.130023  710209 out.go:203] 
	W1014 21:11:48.132817  710209 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000938261s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000908624s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000449893s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001217756s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 21:11:48.132855  710209 out.go:285] * 
	I1014 21:11:48.137900  710209 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 21:11:38 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:38.416300346Z" level=info msg="createCtr: deleting container ID 6b814e14588c7128dfdefcc0015055cf0a1e250a4746a6f9edd920f8ff28b171 from idIndex" id=a8fd6b85-4204-424d-8e78-b836b539cf06 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:38 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:38.416478088Z" level=info msg="createCtr: removing container 6b814e14588c7128dfdefcc0015055cf0a1e250a4746a6f9edd920f8ff28b171" id=a8fd6b85-4204-424d-8e78-b836b539cf06 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:38 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:38.416528189Z" level=info msg="createCtr: deleting container 6b814e14588c7128dfdefcc0015055cf0a1e250a4746a6f9edd920f8ff28b171 from storage" id=a8fd6b85-4204-424d-8e78-b836b539cf06 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:38 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:38.419477941Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=9a2f8e5e-bded-4849-a444-f1048d3194dd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:38 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:38.420934528Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=4edc2255-3ceb-43fa-9d60-253e1c297cef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:38 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:38.423511893Z" level=info msg="createCtr: deleting container ID 8ebfda985fcd290a93848c84480ff1dacf087be66835c4586e9c3a37c006b240 from idIndex" id=4edc2255-3ceb-43fa-9d60-253e1c297cef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:38 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:38.423657832Z" level=info msg="createCtr: removing container 8ebfda985fcd290a93848c84480ff1dacf087be66835c4586e9c3a37c006b240" id=4edc2255-3ceb-43fa-9d60-253e1c297cef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:38 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:38.423765829Z" level=info msg="createCtr: deleting container 8ebfda985fcd290a93848c84480ff1dacf087be66835c4586e9c3a37c006b240 from storage" id=4edc2255-3ceb-43fa-9d60-253e1c297cef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:38 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:38.42710108Z" level=info msg="createCtr: deleting container ID e3e919ce82dbeb14adb102443ed2b9ea35facead830491502d902ea9d850abd5 from idIndex" id=9a2f8e5e-bded-4849-a444-f1048d3194dd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:38 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:38.427140933Z" level=info msg="createCtr: removing container e3e919ce82dbeb14adb102443ed2b9ea35facead830491502d902ea9d850abd5" id=9a2f8e5e-bded-4849-a444-f1048d3194dd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:38 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:38.427178718Z" level=info msg="createCtr: deleting container e3e919ce82dbeb14adb102443ed2b9ea35facead830491502d902ea9d850abd5 from storage" id=9a2f8e5e-bded-4849-a444-f1048d3194dd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:38 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:38.423616354Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-force-systemd-flag-162846_kube-system_b826e3acdeaf472fbdb3cdcf859d7989_0" id=a8fd6b85-4204-424d-8e78-b836b539cf06 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:38 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:38.428716971Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-force-systemd-flag-162846_kube-system_ea3b320e227c2e3301130319b971826e_0" id=4edc2255-3ceb-43fa-9d60-253e1c297cef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:38 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:38.431732374Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-force-systemd-flag-162846_kube-system_ed777af726ea6fbdfeab8d21b53de466_0" id=9a2f8e5e-bded-4849-a444-f1048d3194dd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:40 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:40.370300063Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=39e4b87c-9b05-4a9d-a1ab-d50ec3810d05 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:11:40 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:40.371148969Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=8e3212d9-6dfa-450f-907b-f8c23ba4978c name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:11:40 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:40.371988669Z" level=info msg="Creating container: kube-system/etcd-force-systemd-flag-162846/etcd" id=90a74862-1ec8-449c-bd51-37fbcc2828aa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:40 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:40.372372084Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:11:40 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:40.37685931Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:11:40 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:40.377449095Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:11:40 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:40.389043932Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=90a74862-1ec8-449c-bd51-37fbcc2828aa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:40 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:40.390184781Z" level=info msg="createCtr: deleting container ID 6e02256d234c5bb46ef572f66312d39c78dfb59674c62272cefa24610961aba5 from idIndex" id=90a74862-1ec8-449c-bd51-37fbcc2828aa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:40 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:40.390223149Z" level=info msg="createCtr: removing container 6e02256d234c5bb46ef572f66312d39c78dfb59674c62272cefa24610961aba5" id=90a74862-1ec8-449c-bd51-37fbcc2828aa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:40 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:40.390256265Z" level=info msg="createCtr: deleting container 6e02256d234c5bb46ef572f66312d39c78dfb59674c62272cefa24610961aba5 from storage" id=90a74862-1ec8-449c-bd51-37fbcc2828aa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:11:40 force-systemd-flag-162846 crio[840]: time="2025-10-14T21:11:40.392925882Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-force-systemd-flag-162846_kube-system_35d7b99a7582e6fffbfb766c260db013_0" id=90a74862-1ec8-449c-bd51-37fbcc2828aa name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 21:11:49.453305    2497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:49.453905    2497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:49.455586    2497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:49.456177    2497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:11:49.457358    2497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct14 20:34] overlayfs: idmapped layers are currently not supported
	[  +3.850661] overlayfs: idmapped layers are currently not supported
	[Oct14 20:35] overlayfs: idmapped layers are currently not supported
	[Oct14 20:36] overlayfs: idmapped layers are currently not supported
	[Oct14 20:38] overlayfs: idmapped layers are currently not supported
	[Oct14 20:43] overlayfs: idmapped layers are currently not supported
	[ +36.710003] overlayfs: idmapped layers are currently not supported
	[Oct14 20:45] overlayfs: idmapped layers are currently not supported
	[Oct14 20:46] overlayfs: idmapped layers are currently not supported
	[Oct14 20:47] overlayfs: idmapped layers are currently not supported
	[Oct14 20:48] overlayfs: idmapped layers are currently not supported
	[Oct14 20:49] overlayfs: idmapped layers are currently not supported
	[ +15.861847] overlayfs: idmapped layers are currently not supported
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	[Oct14 21:03] overlayfs: idmapped layers are currently not supported
	[Oct14 21:04] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:11:49 up  3:54,  0 user,  load average: 0.02, 0.69, 1.40
	Linux force-systemd-flag-162846 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 21:11:38 force-systemd-flag-162846 kubelet[1796]:         container kube-scheduler start failed in pod kube-scheduler-force-systemd-flag-162846_kube-system(ed777af726ea6fbdfeab8d21b53de466): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 21:11:38 force-systemd-flag-162846 kubelet[1796]:  > logger="UnhandledError"
	Oct 14 21:11:38 force-systemd-flag-162846 kubelet[1796]: E1014 21:11:38.432160    1796 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-force-systemd-flag-162846" podUID="ed777af726ea6fbdfeab8d21b53de466"
	Oct 14 21:11:40 force-systemd-flag-162846 kubelet[1796]: E1014 21:11:40.369905    1796 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-162846\" not found" node="force-systemd-flag-162846"
	Oct 14 21:11:40 force-systemd-flag-162846 kubelet[1796]: E1014 21:11:40.393262    1796 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 21:11:40 force-systemd-flag-162846 kubelet[1796]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 21:11:40 force-systemd-flag-162846 kubelet[1796]:  > podSandboxID="929e06fc8bfb08c2537898dfbf880400fa01cd2413a7fd2e1e49c23f2f62c041"
	Oct 14 21:11:40 force-systemd-flag-162846 kubelet[1796]: E1014 21:11:40.393365    1796 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 21:11:40 force-systemd-flag-162846 kubelet[1796]:         container etcd start failed in pod etcd-force-systemd-flag-162846_kube-system(35d7b99a7582e6fffbfb766c260db013): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 21:11:40 force-systemd-flag-162846 kubelet[1796]:  > logger="UnhandledError"
	Oct 14 21:11:40 force-systemd-flag-162846 kubelet[1796]: E1014 21:11:40.393396    1796 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-force-systemd-flag-162846" podUID="35d7b99a7582e6fffbfb766c260db013"
	Oct 14 21:11:41 force-systemd-flag-162846 kubelet[1796]: E1014 21:11:41.703315    1796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.76.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-flag-162846.186e77a7e8c9ef22  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-flag-162846,UID:force-systemd-flag-162846,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-flag-162846 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-flag-162846,},FirstTimestamp:2025-10-14 21:07:47.397234466 +0000 UTC m=+0.792175714,LastTimestamp:2025-10-14 21:07:47.397234466 +0000 UTC m=+0.792175714,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:k
ubelet,ReportingInstance:force-systemd-flag-162846,}"
	Oct 14 21:11:44 force-systemd-flag-162846 kubelet[1796]: E1014 21:11:44.012272    1796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/force-systemd-flag-162846?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 14 21:11:44 force-systemd-flag-162846 kubelet[1796]: I1014 21:11:44.189650    1796 kubelet_node_status.go:75] "Attempting to register node" node="force-systemd-flag-162846"
	Oct 14 21:11:44 force-systemd-flag-162846 kubelet[1796]: E1014 21:11:44.190034    1796 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.76.2:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="force-systemd-flag-162846"
	Oct 14 21:11:46 force-systemd-flag-162846 kubelet[1796]: E1014 21:11:46.480416    1796 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 14 21:11:47 force-systemd-flag-162846 kubelet[1796]: E1014 21:11:47.425434    1796 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-flag-162846\" not found"
	Oct 14 21:11:49 force-systemd-flag-162846 kubelet[1796]: E1014 21:11:49.370091    1796 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-162846\" not found" node="force-systemd-flag-162846"
	Oct 14 21:11:49 force-systemd-flag-162846 kubelet[1796]: E1014 21:11:49.397303    1796 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 21:11:49 force-systemd-flag-162846 kubelet[1796]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 21:11:49 force-systemd-flag-162846 kubelet[1796]:  > podSandboxID="09bf4219f984a5b48f831211219bc570ddc93849c593ebdf0513b8ed81d1c8f4"
	Oct 14 21:11:49 force-systemd-flag-162846 kubelet[1796]: E1014 21:11:49.397400    1796 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 21:11:49 force-systemd-flag-162846 kubelet[1796]:         container kube-scheduler start failed in pod kube-scheduler-force-systemd-flag-162846_kube-system(ed777af726ea6fbdfeab8d21b53de466): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 21:11:49 force-systemd-flag-162846 kubelet[1796]:  > logger="UnhandledError"
	Oct 14 21:11:49 force-systemd-flag-162846 kubelet[1796]: E1014 21:11:49.397448    1796 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-force-systemd-flag-162846" podUID="ed777af726ea6fbdfeab8d21b53de466"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-162846 -n force-systemd-flag-162846
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-162846 -n force-systemd-flag-162846: exit status 6 (339.076322ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 21:11:49.919853  719334 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-162846" does not appear in /home/jenkins/minikube-integration/21409-545047/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-flag-162846" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-162846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-162846
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-162846: (1.918118507s)
--- FAIL: TestForceSystemdFlag (519.34s)
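(Editor's note, not part of the CI output.) Every control-plane container in this failure dies with the same CRI-O error, "Container creation error: cannot open sd-bus: No such file or directory", which reads as the container runtime trying to reach a systemd bus that is not there. A minimal diagnostic sketch follows, assuming the profile still exists (i.e. run before the `delete -p` step above); the exact config paths and socket locations are assumptions, not taken from this run:

	# Is PID 1 inside the node actually systemd?
	out/minikube-linux-arm64 ssh -p force-systemd-flag-162846 "ps -p 1 -o comm="
	# Is there a systemd/D-Bus socket for sd-bus to open? (paths assumed)
	out/minikube-linux-arm64 ssh -p force-systemd-flag-162846 "ls -l /run/systemd/private /run/dbus/system_bus_socket"
	# Which cgroup manager is CRI-O configured with? (config location assumed)
	out/minikube-linux-arm64 ssh -p force-systemd-flag-162846 "sudo grep -rn cgroup_manager /etc/crio/"
	# The container listing kubeadm itself recommends in the output above
	out/minikube-linux-arm64 ssh -p force-systemd-flag-162846 "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

If the grep reports cgroup_manager = "systemd" while the systemd/D-Bus sockets are missing, that combination would be consistent with the sd-bus error shown in the logs; this is offered only as a starting point for triage, not as the confirmed root cause.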

                                                
                                    
x
+
TestForceSystemdEnv (512.26s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-347955 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1014 21:06:44.392639  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:08:41.309462  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:11:44.392622  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-347955 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m28.817215257s)

                                                
                                                
-- stdout --
	* [force-systemd-env-347955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-347955" primary control-plane node in "force-systemd-env-347955" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 21:04:23.925997  715133 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:04:23.926134  715133 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:04:23.926145  715133 out.go:374] Setting ErrFile to fd 2...
	I1014 21:04:23.926150  715133 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:04:23.926516  715133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:04:23.926984  715133 out.go:368] Setting JSON to false
	I1014 21:04:23.927883  715133 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13614,"bootTime":1760462250,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:04:23.927983  715133 start.go:141] virtualization:  
	I1014 21:04:23.931310  715133 out.go:179] * [force-systemd-env-347955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:04:23.935125  715133 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:04:23.935353  715133 notify.go:220] Checking for updates...
	I1014 21:04:23.941411  715133 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:04:23.944386  715133 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:04:23.947224  715133 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:04:23.950061  715133 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:04:23.953037  715133 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1014 21:04:23.956488  715133 config.go:182] Loaded profile config "force-systemd-flag-162846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:04:23.956638  715133 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:04:23.982633  715133 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:04:23.982778  715133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:04:24.042349  715133 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:04:24.032744423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:04:24.042465  715133 docker.go:318] overlay module found
	I1014 21:04:24.045597  715133 out.go:179] * Using the docker driver based on user configuration
	I1014 21:04:24.048456  715133 start.go:305] selected driver: docker
	I1014 21:04:24.048475  715133 start.go:925] validating driver "docker" against <nil>
	I1014 21:04:24.048490  715133 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:04:24.049258  715133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:04:24.112515  715133 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:04:24.103639047 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:04:24.112676  715133 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 21:04:24.112893  715133 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 21:04:24.115785  715133 out.go:179] * Using Docker driver with root privileges
	I1014 21:04:24.118668  715133 cni.go:84] Creating CNI manager for ""
	I1014 21:04:24.118742  715133 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:04:24.118755  715133 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 21:04:24.118836  715133 start.go:349] cluster config:
	{Name:force-systemd-env-347955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-347955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:04:24.122012  715133 out.go:179] * Starting "force-systemd-env-347955" primary control-plane node in "force-systemd-env-347955" cluster
	I1014 21:04:24.124919  715133 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:04:24.127835  715133 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:04:24.130783  715133 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:04:24.130855  715133 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 21:04:24.130868  715133 cache.go:58] Caching tarball of preloaded images
	I1014 21:04:24.130980  715133 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:04:24.131000  715133 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 21:04:24.131120  715133 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/config.json ...
	I1014 21:04:24.131144  715133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/config.json: {Name:mk930b382e067504a934d5eaa3f0bbe902c63660 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:04:24.131315  715133 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:04:24.150404  715133 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:04:24.150428  715133 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:04:24.150445  715133 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:04:24.150468  715133 start.go:360] acquireMachinesLock for force-systemd-env-347955: {Name:mk83900274819c051b23db1645ae97a266881daa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:04:24.150589  715133 start.go:364] duration metric: took 101.655µs to acquireMachinesLock for "force-systemd-env-347955"
	I1014 21:04:24.150627  715133 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-347955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-347955 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:04:24.150700  715133 start.go:125] createHost starting for "" (driver="docker")
	I1014 21:04:24.154048  715133 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 21:04:24.154281  715133 start.go:159] libmachine.API.Create for "force-systemd-env-347955" (driver="docker")
	I1014 21:04:24.154329  715133 client.go:168] LocalClient.Create starting
	I1014 21:04:24.154416  715133 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem
	I1014 21:04:24.154455  715133 main.go:141] libmachine: Decoding PEM data...
	I1014 21:04:24.154472  715133 main.go:141] libmachine: Parsing certificate...
	I1014 21:04:24.154524  715133 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem
	I1014 21:04:24.154548  715133 main.go:141] libmachine: Decoding PEM data...
	I1014 21:04:24.154561  715133 main.go:141] libmachine: Parsing certificate...
	I1014 21:04:24.154940  715133 cli_runner.go:164] Run: docker network inspect force-systemd-env-347955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 21:04:24.171598  715133 cli_runner.go:211] docker network inspect force-systemd-env-347955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 21:04:24.171680  715133 network_create.go:284] running [docker network inspect force-systemd-env-347955] to gather additional debugging logs...
	I1014 21:04:24.171710  715133 cli_runner.go:164] Run: docker network inspect force-systemd-env-347955
	W1014 21:04:24.188546  715133 cli_runner.go:211] docker network inspect force-systemd-env-347955 returned with exit code 1
	I1014 21:04:24.188579  715133 network_create.go:287] error running [docker network inspect force-systemd-env-347955]: docker network inspect force-systemd-env-347955: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-347955 not found
	I1014 21:04:24.188594  715133 network_create.go:289] output of [docker network inspect force-systemd-env-347955]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-347955 not found
	
	** /stderr **
	I1014 21:04:24.188694  715133 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:04:24.205834  715133 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3483fd8ccce IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:16:f0:2b:be:5b} reservation:<nil>}
	I1014 21:04:24.206136  715133 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-92eecc9d5dcd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:46:39:04:de:cc:9d} reservation:<nil>}
	I1014 21:04:24.206452  715133 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6db3c426d67c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:2a:f4:c0:70:fe} reservation:<nil>}
	I1014 21:04:24.206789  715133 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5c65fc2ffd0d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:da:4c:5b:88:44:fa} reservation:<nil>}
	I1014 21:04:24.207236  715133 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d9dd0}
	I1014 21:04:24.207256  715133 network_create.go:124] attempt to create docker network force-systemd-env-347955 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1014 21:04:24.207313  715133 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-347955 force-systemd-env-347955
	I1014 21:04:24.264122  715133 network_create.go:108] docker network force-systemd-env-347955 192.168.85.0/24 created
	I1014 21:04:24.264151  715133 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-347955" container
	I1014 21:04:24.264246  715133 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 21:04:24.280701  715133 cli_runner.go:164] Run: docker volume create force-systemd-env-347955 --label name.minikube.sigs.k8s.io=force-systemd-env-347955 --label created_by.minikube.sigs.k8s.io=true
	I1014 21:04:24.298534  715133 oci.go:103] Successfully created a docker volume force-systemd-env-347955
	I1014 21:04:24.298629  715133 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-347955-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-347955 --entrypoint /usr/bin/test -v force-systemd-env-347955:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 21:04:24.866435  715133 oci.go:107] Successfully prepared a docker volume force-systemd-env-347955
	I1014 21:04:24.866483  715133 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:04:24.866519  715133 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 21:04:24.866587  715133 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-347955:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 21:04:29.254306  715133 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-347955:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.387660998s)
	I1014 21:04:29.254340  715133 kic.go:203] duration metric: took 4.387833719s to extract preloaded images to volume ...
	W1014 21:04:29.254485  715133 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1014 21:04:29.254591  715133 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 21:04:29.306489  715133 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-347955 --name force-systemd-env-347955 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-347955 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-347955 --network force-systemd-env-347955 --ip 192.168.85.2 --volume force-systemd-env-347955:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 21:04:29.657033  715133 cli_runner.go:164] Run: docker container inspect force-systemd-env-347955 --format={{.State.Running}}
	I1014 21:04:29.680613  715133 cli_runner.go:164] Run: docker container inspect force-systemd-env-347955 --format={{.State.Status}}
	I1014 21:04:29.704604  715133 cli_runner.go:164] Run: docker exec force-systemd-env-347955 stat /var/lib/dpkg/alternatives/iptables
	I1014 21:04:29.756505  715133 oci.go:144] the created container "force-systemd-env-347955" has a running status.
	I1014 21:04:29.756533  715133 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-env-347955/id_rsa...
	I1014 21:04:29.969659  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-env-347955/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1014 21:04:29.969708  715133 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-env-347955/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 21:04:29.992619  715133 cli_runner.go:164] Run: docker container inspect force-systemd-env-347955 --format={{.State.Status}}
	I1014 21:04:30.018777  715133 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 21:04:30.018800  715133 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-347955 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 21:04:30.104773  715133 cli_runner.go:164] Run: docker container inspect force-systemd-env-347955 --format={{.State.Status}}
	I1014 21:04:30.131695  715133 machine.go:93] provisionDockerMachine start ...
	I1014 21:04:30.131801  715133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347955
	I1014 21:04:30.161396  715133 main.go:141] libmachine: Using SSH client type: native
	I1014 21:04:30.161751  715133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33767 <nil> <nil>}
	I1014 21:04:30.161762  715133 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:04:30.162359  715133 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36080->127.0.0.1:33767: read: connection reset by peer
	I1014 21:04:33.325004  715133 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-347955
	
	I1014 21:04:33.325053  715133 ubuntu.go:182] provisioning hostname "force-systemd-env-347955"
	I1014 21:04:33.325164  715133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347955
	I1014 21:04:33.342467  715133 main.go:141] libmachine: Using SSH client type: native
	I1014 21:04:33.342786  715133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33767 <nil> <nil>}
	I1014 21:04:33.342804  715133 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-347955 && echo "force-systemd-env-347955" | sudo tee /etc/hostname
	I1014 21:04:33.497211  715133 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-347955
	
	I1014 21:04:33.497297  715133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347955
	I1014 21:04:33.514955  715133 main.go:141] libmachine: Using SSH client type: native
	I1014 21:04:33.515276  715133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33767 <nil> <nil>}
	I1014 21:04:33.515298  715133 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-347955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-347955/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-347955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:04:33.661185  715133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 21:04:33.661214  715133 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:04:33.661247  715133 ubuntu.go:190] setting up certificates
	I1014 21:04:33.661257  715133 provision.go:84] configureAuth start
	I1014 21:04:33.661340  715133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-347955
	I1014 21:04:33.684264  715133 provision.go:143] copyHostCerts
	I1014 21:04:33.684329  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:04:33.684371  715133 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:04:33.684384  715133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:04:33.684459  715133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:04:33.684544  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:04:33.684568  715133 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:04:33.684577  715133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:04:33.684605  715133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:04:33.684647  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:04:33.684666  715133 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:04:33.684674  715133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:04:33.684697  715133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:04:33.684746  715133 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-347955 san=[127.0.0.1 192.168.85.2 force-systemd-env-347955 localhost minikube]
	I1014 21:04:34.390686  715133 provision.go:177] copyRemoteCerts
	I1014 21:04:34.390777  715133 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:04:34.390833  715133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347955
	I1014 21:04:34.407284  715133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33767 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-env-347955/id_rsa Username:docker}
	I1014 21:04:34.511935  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 21:04:34.511990  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:04:34.529156  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 21:04:34.529218  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1014 21:04:34.546863  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 21:04:34.546939  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 21:04:34.564667  715133 provision.go:87] duration metric: took 903.386257ms to configureAuth
	I1014 21:04:34.564697  715133 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:04:34.564881  715133 config.go:182] Loaded profile config "force-systemd-env-347955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:04:34.565003  715133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347955
	I1014 21:04:34.582097  715133 main.go:141] libmachine: Using SSH client type: native
	I1014 21:04:34.582406  715133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33767 <nil> <nil>}
	I1014 21:04:34.582425  715133 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:04:34.830671  715133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:04:34.830693  715133 machine.go:96] duration metric: took 4.698976956s to provisionDockerMachine
	I1014 21:04:34.830703  715133 client.go:171] duration metric: took 10.67636263s to LocalClient.Create
	I1014 21:04:34.830724  715133 start.go:167] duration metric: took 10.676443829s to libmachine.API.Create "force-systemd-env-347955"
	I1014 21:04:34.830733  715133 start.go:293] postStartSetup for "force-systemd-env-347955" (driver="docker")
	I1014 21:04:34.830742  715133 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:04:34.830804  715133 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:04:34.830842  715133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347955
	I1014 21:04:34.849240  715133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33767 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-env-347955/id_rsa Username:docker}
	I1014 21:04:34.954210  715133 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:04:34.957954  715133 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:04:34.957987  715133 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:04:34.957999  715133 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:04:34.958063  715133 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:04:34.958148  715133 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:04:34.958160  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /etc/ssl/certs/5469102.pem
	I1014 21:04:34.958276  715133 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:04:34.965883  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:04:34.983714  715133 start.go:296] duration metric: took 152.966774ms for postStartSetup
	I1014 21:04:34.984088  715133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-347955
	I1014 21:04:35.001755  715133 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/config.json ...
	I1014 21:04:35.002100  715133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:04:35.002157  715133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347955
	I1014 21:04:35.021758  715133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33767 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-env-347955/id_rsa Username:docker}
	I1014 21:04:35.121668  715133 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:04:35.126863  715133 start.go:128] duration metric: took 10.976147826s to createHost
	I1014 21:04:35.126892  715133 start.go:83] releasing machines lock for "force-systemd-env-347955", held for 10.976288783s
	I1014 21:04:35.126973  715133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-347955
	I1014 21:04:35.144722  715133 ssh_runner.go:195] Run: cat /version.json
	I1014 21:04:35.144792  715133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347955
	I1014 21:04:35.145044  715133 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:04:35.145112  715133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-347955
	I1014 21:04:35.162942  715133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33767 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-env-347955/id_rsa Username:docker}
	I1014 21:04:35.177347  715133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33767 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/force-systemd-env-347955/id_rsa Username:docker}
	I1014 21:04:35.267998  715133 ssh_runner.go:195] Run: systemctl --version
	I1014 21:04:35.357856  715133 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:04:35.396151  715133 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:04:35.400974  715133 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:04:35.401043  715133 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:04:35.429925  715133 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1014 21:04:35.429951  715133 start.go:495] detecting cgroup driver to use...
	I1014 21:04:35.429968  715133 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1014 21:04:35.430022  715133 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:04:35.448694  715133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:04:35.461972  715133 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:04:35.462064  715133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:04:35.478655  715133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:04:35.497579  715133 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:04:35.620703  715133 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:04:35.752096  715133 docker.go:234] disabling docker service ...
	I1014 21:04:35.752163  715133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:04:35.795071  715133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:04:35.812904  715133 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:04:35.945893  715133 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:04:36.061606  715133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:04:36.076130  715133 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:04:36.091652  715133 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:04:36.091763  715133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:04:36.101319  715133 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 21:04:36.101444  715133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:04:36.110734  715133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:04:36.119741  715133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:04:36.129736  715133 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:04:36.138242  715133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:04:36.147132  715133 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:04:36.160913  715133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:04:36.169871  715133 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:04:36.177537  715133 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:04:36.185128  715133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:04:36.293184  715133 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 21:04:36.423477  715133 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:04:36.423618  715133 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:04:36.427682  715133 start.go:563] Will wait 60s for crictl version
	I1014 21:04:36.427754  715133 ssh_runner.go:195] Run: which crictl
	I1014 21:04:36.431575  715133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:04:36.456901  715133 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:04:36.456986  715133 ssh_runner.go:195] Run: crio --version
	I1014 21:04:36.488421  715133 ssh_runner.go:195] Run: crio --version
	I1014 21:04:36.519433  715133 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 21:04:36.522157  715133 cli_runner.go:164] Run: docker network inspect force-systemd-env-347955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:04:36.538814  715133 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1014 21:04:36.542791  715133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:04:36.552500  715133 kubeadm.go:883] updating cluster {Name:force-systemd-env-347955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-347955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:04:36.552612  715133 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:04:36.552670  715133 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:04:36.585953  715133 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:04:36.585975  715133 crio.go:433] Images already preloaded, skipping extraction
	I1014 21:04:36.586033  715133 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:04:36.615167  715133 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:04:36.615188  715133 cache_images.go:85] Images are preloaded, skipping loading
	I1014 21:04:36.615195  715133 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1014 21:04:36.615283  715133 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-347955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-347955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 21:04:36.615362  715133 ssh_runner.go:195] Run: crio config
	I1014 21:04:36.689628  715133 cni.go:84] Creating CNI manager for ""
	I1014 21:04:36.689650  715133 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:04:36.689669  715133 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 21:04:36.689692  715133 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-347955 NodeName:force-systemd-env-347955 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:04:36.689831  715133 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-347955"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 21:04:36.689910  715133 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:04:36.698146  715133 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:04:36.698237  715133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:04:36.705932  715133 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1014 21:04:36.719304  715133 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:04:36.732900  715133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1014 21:04:36.746060  715133 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:04:36.749766  715133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:04:36.759484  715133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:04:36.881905  715133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:04:36.898363  715133 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955 for IP: 192.168.85.2
	I1014 21:04:36.898429  715133 certs.go:195] generating shared ca certs ...
	I1014 21:04:36.898467  715133 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:04:36.898639  715133 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:04:36.898712  715133 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:04:36.898751  715133 certs.go:257] generating profile certs ...
	I1014 21:04:36.898832  715133 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/client.key
	I1014 21:04:36.898869  715133 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/client.crt with IP's: []
	I1014 21:04:37.241019  715133 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/client.crt ...
	I1014 21:04:37.241054  715133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/client.crt: {Name:mkc06be81668f24c404438108816b1c475f15366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:04:37.241260  715133 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/client.key ...
	I1014 21:04:37.241275  715133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/client.key: {Name:mk922553fe25eb2c06611c488df204aba4cd3b44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:04:37.241377  715133 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.key.32e408ba
	I1014 21:04:37.241397  715133 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.crt.32e408ba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1014 21:04:37.424345  715133 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.crt.32e408ba ...
	I1014 21:04:37.424375  715133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.crt.32e408ba: {Name:mk6371f088862bcbe09cf9fb59da3a2d83b274d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:04:37.424561  715133 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.key.32e408ba ...
	I1014 21:04:37.424576  715133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.key.32e408ba: {Name:mk0316c1f89eff3d75875036e8bb28dace7ec40a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:04:37.424662  715133 certs.go:382] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.crt.32e408ba -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.crt
	I1014 21:04:37.424747  715133 certs.go:386] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.key.32e408ba -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.key
	I1014 21:04:37.424808  715133 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.key
	I1014 21:04:37.424826  715133 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.crt with IP's: []
	I1014 21:04:37.936954  715133 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.crt ...
	I1014 21:04:37.936987  715133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.crt: {Name:mka78d1eceb1edeb04c372f6643fa742b67d5787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:04:37.937170  715133 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.key ...
	I1014 21:04:37.937184  715133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.key: {Name:mkc87ee9795d29ad6e431b66fc6016ff292df6a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:04:37.937267  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 21:04:37.937289  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 21:04:37.937303  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 21:04:37.937322  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 21:04:37.937342  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 21:04:37.937354  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 21:04:37.937373  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 21:04:37.937384  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 21:04:37.937444  715133 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:04:37.937492  715133 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:04:37.937505  715133 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:04:37.937530  715133 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:04:37.937557  715133 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:04:37.937583  715133 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:04:37.937632  715133 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:04:37.937662  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:04:37.937679  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem -> /usr/share/ca-certificates/546910.pem
	I1014 21:04:37.937691  715133 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /usr/share/ca-certificates/5469102.pem
	I1014 21:04:37.938290  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:04:37.957095  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:04:37.976185  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:04:37.994131  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:04:38.019056  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1014 21:04:38.039425  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 21:04:38.059547  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:04:38.079536  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/force-systemd-env-347955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 21:04:38.099856  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:04:38.119952  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:04:38.139213  715133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:04:38.157975  715133 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:04:38.171318  715133 ssh_runner.go:195] Run: openssl version
	I1014 21:04:38.177797  715133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:04:38.186991  715133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:04:38.191027  715133 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:04:38.191098  715133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:04:38.232410  715133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 21:04:38.241190  715133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:04:38.249792  715133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:04:38.253856  715133 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:04:38.253926  715133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:04:38.300234  715133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:04:38.308870  715133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:04:38.317772  715133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:04:38.321785  715133 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:04:38.321853  715133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:04:38.363445  715133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 21:04:38.372017  715133 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:04:38.375579  715133 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 21:04:38.375634  715133 kubeadm.go:400] StartCluster: {Name:force-systemd-env-347955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-347955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:04:38.375714  715133 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:04:38.375775  715133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:04:38.404197  715133 cri.go:89] found id: ""
	I1014 21:04:38.404270  715133 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:04:38.412598  715133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 21:04:38.420719  715133 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 21:04:38.420826  715133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 21:04:38.429151  715133 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 21:04:38.429174  715133 kubeadm.go:157] found existing configuration files:
	
	I1014 21:04:38.429230  715133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 21:04:38.437397  715133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 21:04:38.437496  715133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 21:04:38.445514  715133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 21:04:38.453560  715133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 21:04:38.453627  715133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 21:04:38.461524  715133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 21:04:38.469409  715133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 21:04:38.469518  715133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 21:04:38.476879  715133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 21:04:38.484772  715133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 21:04:38.484849  715133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 21:04:38.492125  715133 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 21:04:38.569175  715133 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1014 21:04:38.569419  715133 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1014 21:04:38.645760  715133 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
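	These three preflight findings are warnings only and do not stop the run. When reproducing locally they can be inspected with a couple of standard commands (a sketch; assumes a systemd host with /sys/fs/cgroup mounted):
	
	    # cgroup version: prints cgroup2fs on cgroups v2, tmpfs on the v1 layout warned about above
	    stat -fc %T /sys/fs/cgroup/
	    # clears the Service-Kubelet warning exactly as the message suggests
	    sudo systemctl enable kubelet.service
	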
	I1014 21:08:47.972269  715133 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 21:08:47.972392  715133 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 21:08:47.975912  715133 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 21:08:47.975972  715133 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 21:08:47.976119  715133 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 21:08:47.976204  715133 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1014 21:08:47.976253  715133 kubeadm.go:318] OS: Linux
	I1014 21:08:47.976354  715133 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 21:08:47.976432  715133 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1014 21:08:47.976487  715133 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 21:08:47.976545  715133 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 21:08:47.976599  715133 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 21:08:47.976672  715133 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 21:08:47.976724  715133 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 21:08:47.976782  715133 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 21:08:47.976840  715133 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1014 21:08:47.976922  715133 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 21:08:47.977031  715133 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 21:08:47.977128  715133 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 21:08:47.977201  715133 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 21:08:47.981297  715133 out.go:252]   - Generating certificates and keys ...
	I1014 21:08:47.981409  715133 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 21:08:47.981486  715133 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 21:08:47.981564  715133 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 21:08:47.981631  715133 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 21:08:47.981709  715133 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 21:08:47.981771  715133 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 21:08:47.981837  715133 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 21:08:47.981986  715133 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-347955 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1014 21:08:47.982048  715133 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 21:08:47.982191  715133 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-347955 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1014 21:08:47.982268  715133 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 21:08:47.982342  715133 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 21:08:47.982396  715133 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 21:08:47.982461  715133 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 21:08:47.982521  715133 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 21:08:47.982588  715133 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 21:08:47.982652  715133 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 21:08:47.982726  715133 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 21:08:47.982802  715133 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 21:08:47.982898  715133 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 21:08:47.982975  715133 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 21:08:47.985888  715133 out.go:252]   - Booting up control plane ...
	I1014 21:08:47.986020  715133 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 21:08:47.986118  715133 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 21:08:47.986200  715133 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 21:08:47.986328  715133 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 21:08:47.986442  715133 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 21:08:47.986555  715133 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 21:08:47.986647  715133 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 21:08:47.986695  715133 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 21:08:47.986866  715133 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 21:08:47.987025  715133 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 21:08:47.987104  715133 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.50184272s
	I1014 21:08:47.987213  715133 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 21:08:47.987317  715133 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1014 21:08:47.987430  715133 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 21:08:47.987524  715133 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 21:08:47.987607  715133 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000155897s
	I1014 21:08:47.987689  715133 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000452065s
	I1014 21:08:47.987782  715133 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000190219s
	I1014 21:08:47.987792  715133 kubeadm.go:318] 
	I1014 21:08:47.987887  715133 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 21:08:47.987976  715133 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 21:08:47.988078  715133 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 21:08:47.988180  715133 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 21:08:47.988261  715133 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 21:08:47.988361  715133 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
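	Following that advice by hand, the two crictl commands can be folded into one loop that tails every kube-* container, running or exited (a sketch; the CRI-O socket path is the one shown in the log):
	
	    ENDPOINT=unix:///var/run/crio/crio.sock
	    for id in $(sudo crictl --runtime-endpoint "$ENDPOINT" ps -a --quiet --name kube); do
	      echo "=== container $id ==="
	      sudo crictl --runtime-endpoint "$ENDPOINT" logs --tail 20 "$id" || true
	    done
	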
	W1014 21:08:47.988493  715133 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-347955 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-347955 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50184272s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000155897s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000452065s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000190219s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-347955 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-347955 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50184272s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000155897s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000452065s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000190219s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
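	Before minikube resets and retries below, the failure can be narrowed down by probing the same health endpoints kubeadm was polling, from inside the node (a sketch using the addresses reported above; curl is assumed to be available in the node image):
	
	    curl -ksS --max-time 5 https://192.168.85.2:8443/livez; echo
	    curl -ksS --max-time 5 https://127.0.0.1:10259/livez; echo
	    curl -ksS --max-time 5 https://127.0.0.1:10257/healthz; echo
	
	Connection refused on all three, as here, means none of the control-plane components ever started listening; the next step is the kubelet and container-log inspection shown later in this output.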
	
	I1014 21:08:47.988583  715133 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 21:08:47.988845  715133 kubeadm.go:318] 
	I1014 21:08:48.545522  715133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:08:48.559072  715133 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 21:08:48.559143  715133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 21:08:48.567048  715133 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 21:08:48.567068  715133 kubeadm.go:157] found existing configuration files:
	
	I1014 21:08:48.567121  715133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 21:08:48.574764  715133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 21:08:48.574847  715133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 21:08:48.582554  715133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 21:08:48.590361  715133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 21:08:48.590449  715133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 21:08:48.597910  715133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 21:08:48.605579  715133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 21:08:48.605643  715133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 21:08:48.613326  715133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 21:08:48.621004  715133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 21:08:48.621114  715133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 21:08:48.629121  715133 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 21:08:48.671322  715133 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 21:08:48.671386  715133 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 21:08:48.694184  715133 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 21:08:48.694310  715133 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1014 21:08:48.694378  715133 kubeadm.go:318] OS: Linux
	I1014 21:08:48.694451  715133 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 21:08:48.694527  715133 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1014 21:08:48.694602  715133 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 21:08:48.694680  715133 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 21:08:48.694755  715133 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 21:08:48.694861  715133 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 21:08:48.694940  715133 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 21:08:48.695023  715133 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 21:08:48.695104  715133 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1014 21:08:48.766821  715133 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 21:08:48.766941  715133 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 21:08:48.767070  715133 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 21:08:48.780828  715133 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 21:08:48.787567  715133 out.go:252]   - Generating certificates and keys ...
	I1014 21:08:48.787732  715133 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 21:08:48.787839  715133 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 21:08:48.787962  715133 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 21:08:48.788061  715133 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 21:08:48.788178  715133 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 21:08:48.788269  715133 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 21:08:48.788388  715133 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 21:08:48.788502  715133 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 21:08:48.788612  715133 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 21:08:48.788738  715133 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 21:08:48.788810  715133 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 21:08:48.788905  715133 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 21:08:49.061981  715133 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 21:08:49.303765  715133 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 21:08:49.518251  715133 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 21:08:49.957458  715133 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 21:08:50.488659  715133 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 21:08:50.489275  715133 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 21:08:50.492880  715133 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 21:08:50.495976  715133 out.go:252]   - Booting up control plane ...
	I1014 21:08:50.496093  715133 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 21:08:50.496177  715133 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 21:08:50.496939  715133 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 21:08:50.512756  715133 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 21:08:50.512945  715133 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 21:08:50.521230  715133 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 21:08:50.521507  715133 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 21:08:50.521784  715133 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 21:08:50.655527  715133 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 21:08:50.655651  715133 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 21:08:52.156909  715133 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.50149622s
	I1014 21:08:52.160621  715133 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 21:08:52.160949  715133 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1014 21:08:52.161228  715133 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 21:08:52.161326  715133 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 21:12:52.161143  715133 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000366625s
	I1014 21:12:52.161260  715133 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000352503s
	I1014 21:12:52.161372  715133 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000154724s
	I1014 21:12:52.161383  715133 kubeadm.go:318] 
	I1014 21:12:52.161482  715133 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 21:12:52.161580  715133 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 21:12:52.161674  715133 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 21:12:52.161771  715133 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 21:12:52.161848  715133 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 21:12:52.161945  715133 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 21:12:52.161951  715133 kubeadm.go:318] 
	I1014 21:12:52.166087  715133 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1014 21:12:52.166871  715133 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1014 21:12:52.167068  715133 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 21:12:52.168632  715133 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 21:12:52.168793  715133 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 21:12:52.168872  715133 kubeadm.go:402] duration metric: took 8m13.793238745s to StartCluster
	I1014 21:12:52.168943  715133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:12:52.169038  715133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:12:52.193837  715133 cri.go:89] found id: ""
	I1014 21:12:52.193872  715133 logs.go:282] 0 containers: []
	W1014 21:12:52.193881  715133 logs.go:284] No container was found matching "kube-apiserver"
	I1014 21:12:52.193888  715133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:12:52.193946  715133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:12:52.224652  715133 cri.go:89] found id: ""
	I1014 21:12:52.224677  715133 logs.go:282] 0 containers: []
	W1014 21:12:52.224686  715133 logs.go:284] No container was found matching "etcd"
	I1014 21:12:52.224692  715133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:12:52.224755  715133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:12:52.255383  715133 cri.go:89] found id: ""
	I1014 21:12:52.255408  715133 logs.go:282] 0 containers: []
	W1014 21:12:52.255417  715133 logs.go:284] No container was found matching "coredns"
	I1014 21:12:52.255424  715133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:12:52.255495  715133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:12:52.284182  715133 cri.go:89] found id: ""
	I1014 21:12:52.284208  715133 logs.go:282] 0 containers: []
	W1014 21:12:52.284216  715133 logs.go:284] No container was found matching "kube-scheduler"
	I1014 21:12:52.284222  715133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:12:52.284279  715133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:12:52.309744  715133 cri.go:89] found id: ""
	I1014 21:12:52.309768  715133 logs.go:282] 0 containers: []
	W1014 21:12:52.309777  715133 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:12:52.309783  715133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:12:52.309858  715133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:12:52.346466  715133 cri.go:89] found id: ""
	I1014 21:12:52.346491  715133 logs.go:282] 0 containers: []
	W1014 21:12:52.346500  715133 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 21:12:52.346507  715133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:12:52.346563  715133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:12:52.373318  715133 cri.go:89] found id: ""
	I1014 21:12:52.373352  715133 logs.go:282] 0 containers: []
	W1014 21:12:52.373361  715133 logs.go:284] No container was found matching "kindnet"
	I1014 21:12:52.373388  715133 logs.go:123] Gathering logs for kubelet ...
	I1014 21:12:52.373405  715133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:12:52.465758  715133 logs.go:123] Gathering logs for dmesg ...
	I1014 21:12:52.465793  715133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 21:12:52.485569  715133 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:12:52.485600  715133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:12:52.556813  715133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 21:12:52.548216    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:52.548812    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:52.550302    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:52.550864    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:52.552616    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 21:12:52.548216    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:52.548812    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:52.550302    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:52.550864    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:52.552616    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:12:52.556837  715133 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:12:52.556851  715133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:12:52.632570  715133 logs.go:123] Gathering logs for container status ...
	I1014 21:12:52.632650  715133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
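	The diagnostics minikube gathers here can also be pulled manually inside the node (via `minikube ssh`, for example); the commands below mirror the Run: lines above:
	
	    sudo journalctl -u kubelet -n 400 --no-pager
	    sudo journalctl -u crio -n 400 --no-pager
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo crictl ps -a
	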
	W1014 21:12:52.663854  715133 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50149622s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000366625s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000352503s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000154724s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 21:12:52.663908  715133 out.go:285] * 
	W1014 21:12:52.663968  715133 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50149622s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000366625s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000352503s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000154724s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50149622s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000366625s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000352503s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000154724s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 21:12:52.663986  715133 out.go:285] * 
	* 
	W1014 21:12:52.666155  715133 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 21:12:52.672877  715133 out.go:203] 
	W1014 21:12:52.676723  715133 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50149622s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000366625s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000352503s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000154724s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50149622s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000366625s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000352503s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000154724s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 21:12:52.676747  715133 out.go:285] * 
	* 
	I1014 21:12:52.681768  715133 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-347955 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-10-14 21:12:52.736801597 +0000 UTC m=+4530.154955336
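A minimal triage sketch following the crictl advice printed in the kubeadm output above; the profile name and CRI socket are taken from this run, and wrapping crictl in `docker exec` is an assumption (crictl lives inside the minikube node container, not on the Jenkins host):

	# list the control-plane containers that crio started inside the node
	docker exec force-systemd-env-347955 crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of whichever container exited (CONTAINERID comes from the listing above)
	docker exec force-systemd-env-347955 crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# capture full minikube logs for an issue report, as the boxed message above suggests
	out/minikube-linux-arm64 -p force-systemd-env-347955 logs --file=logs.txt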
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-env-347955
helpers_test.go:243: (dbg) docker inspect force-systemd-env-347955:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "deec3e5f876a8ee784b2af79f7d77d6517d47acaca559d1e94661fe0471d9e3c",
	        "Created": "2025-10-14T21:04:29.322757894Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 715535,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:04:29.415913207Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/deec3e5f876a8ee784b2af79f7d77d6517d47acaca559d1e94661fe0471d9e3c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/deec3e5f876a8ee784b2af79f7d77d6517d47acaca559d1e94661fe0471d9e3c/hostname",
	        "HostsPath": "/var/lib/docker/containers/deec3e5f876a8ee784b2af79f7d77d6517d47acaca559d1e94661fe0471d9e3c/hosts",
	        "LogPath": "/var/lib/docker/containers/deec3e5f876a8ee784b2af79f7d77d6517d47acaca559d1e94661fe0471d9e3c/deec3e5f876a8ee784b2af79f7d77d6517d47acaca559d1e94661fe0471d9e3c-json.log",
	        "Name": "/force-systemd-env-347955",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-347955:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-347955",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "deec3e5f876a8ee784b2af79f7d77d6517d47acaca559d1e94661fe0471d9e3c",
	                "LowerDir": "/var/lib/docker/overlay2/e996b5062a577c1ee0976810b19e3cade588654197f6cf8dfdcdabb19e95fd0f-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e996b5062a577c1ee0976810b19e3cade588654197f6cf8dfdcdabb19e95fd0f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e996b5062a577c1ee0976810b19e3cade588654197f6cf8dfdcdabb19e95fd0f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e996b5062a577c1ee0976810b19e3cade588654197f6cf8dfdcdabb19e95fd0f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-347955",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-347955/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-347955",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-347955",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-347955",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1688f70b3484f945ee55d8abb31a16d6ecaa22c723e65f9bec471aedbeaa7ad1",
	            "SandboxKey": "/var/run/docker/netns/1688f70b3484",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33767"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33768"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33771"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33769"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33770"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-347955": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:88:bd:31:78:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eb78125c803178156926c3949b86fb66604a0024ea69ee4a49cd2e2f5fe52f6f",
	                    "EndpointID": "fa46e7dd501bf4a782ea89e4412460af20602241f886f73488757b64f652480a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-347955",
	                        "deec3e5f876a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
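Individual fields from an inspect dump like the one above can be read with Go templates instead of scanning the full JSON; a small sketch against values visible in this dump (container name, network name, and port keys as shown):

	# static IP on the profile network (192.168.85.2 above)
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' force-systemd-env-347955
	# host port published for SSH (22/tcp -> 33767 above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-env-347955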
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-347955 -n force-systemd-env-347955
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-347955 -n force-systemd-env-347955: exit status 6 (330.932279ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 21:12:53.082263  722263 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-347955" does not appear in /home/jenkins/minikube-integration/21409-545047/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
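For reference, the stale-context warning and the kubeconfig endpoint error above would normally be addressed as sketched below, using the binary, profile, and kubeconfig path from this run; it only helps once the profile actually exists in the kubeconfig, which the stderr above shows it does not, because the start itself failed:

	# repoint the kubectl context at the current minikube endpoint
	out/minikube-linux-arm64 -p force-systemd-env-347955 update-context
	# confirm which kubeconfig the tests are consulting
	echo "$KUBECONFIG"   # /home/jenkins/minikube-integration/21409-545047/kubeconfig in this run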
helpers_test.go:252: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-347955 logs -n 25
helpers_test.go:260: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-810782 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl status docker --all --full --no-pager                                      │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl cat docker --no-pager                                                      │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cat /etc/docker/daemon.json                                                          │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo docker system info                                                                   │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cri-dockerd --version                                                                │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl cat containerd --no-pager                                                  │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cat /etc/containerd/config.toml                                                      │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo containerd config dump                                                               │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl status crio --all --full --no-pager                                        │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl cat crio --no-pager                                                        │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo crio config                                                                          │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ delete  │ -p cilium-810782                                                                                           │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │ 14 Oct 25 21:04 UTC │
	│ start   │ -p force-systemd-env-347955 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-347955  │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ force-systemd-flag-162846 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                       │ force-systemd-flag-162846 │ jenkins │ v1.37.0 │ 14 Oct 25 21:11 UTC │ 14 Oct 25 21:11 UTC │
	│ delete  │ -p force-systemd-flag-162846                                                                               │ force-systemd-flag-162846 │ jenkins │ v1.37.0 │ 14 Oct 25 21:11 UTC │ 14 Oct 25 21:11 UTC │
	│ start   │ -p cert-expiration-341199 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio     │ cert-expiration-341199    │ jenkins │ v1.37.0 │ 14 Oct 25 21:11 UTC │ 14 Oct 25 21:12 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:11:51
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:11:51.904569  719703 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:11:51.904683  719703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:11:51.904687  719703 out.go:374] Setting ErrFile to fd 2...
	I1014 21:11:51.904691  719703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:11:51.904965  719703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:11:51.905375  719703 out.go:368] Setting JSON to false
	I1014 21:11:51.906277  719703 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14062,"bootTime":1760462250,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:11:51.906339  719703 start.go:141] virtualization:  
	I1014 21:11:51.910229  719703 out.go:179] * [cert-expiration-341199] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:11:51.915074  719703 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:11:51.915183  719703 notify.go:220] Checking for updates...
	I1014 21:11:51.922478  719703 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:11:51.925906  719703 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:11:51.929269  719703 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:11:51.932631  719703 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:11:51.935886  719703 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:11:51.939420  719703 config.go:182] Loaded profile config "force-systemd-env-347955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:11:51.939522  719703 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:11:51.963295  719703 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:11:51.963415  719703 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:11:52.023183  719703 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:11:52.013172782 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:11:52.023289  719703 docker.go:318] overlay module found
	I1014 21:11:52.026635  719703 out.go:179] * Using the docker driver based on user configuration
	I1014 21:11:52.029675  719703 start.go:305] selected driver: docker
	I1014 21:11:52.029686  719703 start.go:925] validating driver "docker" against <nil>
	I1014 21:11:52.029699  719703 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:11:52.030493  719703 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:11:52.087941  719703 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:11:52.077507584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:11:52.088096  719703 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 21:11:52.088368  719703 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 21:11:52.091589  719703 out.go:179] * Using Docker driver with root privileges
	I1014 21:11:52.094595  719703 cni.go:84] Creating CNI manager for ""
	I1014 21:11:52.094676  719703 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:11:52.094684  719703 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 21:11:52.094775  719703 start.go:349] cluster config:
	{Name:cert-expiration-341199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-341199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:11:52.097994  719703 out.go:179] * Starting "cert-expiration-341199" primary control-plane node in "cert-expiration-341199" cluster
	I1014 21:11:52.100904  719703 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:11:52.103905  719703 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:11:52.106791  719703 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:11:52.106846  719703 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 21:11:52.106858  719703 cache.go:58] Caching tarball of preloaded images
	I1014 21:11:52.106878  719703 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:11:52.106954  719703 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:11:52.106963  719703 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 21:11:52.107069  719703 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/config.json ...
	I1014 21:11:52.107086  719703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/config.json: {Name:mk3af645f9705b29671158250e3d071bc4506061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:11:52.126819  719703 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:11:52.126832  719703 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:11:52.126866  719703 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:11:52.126888  719703 start.go:360] acquireMachinesLock for cert-expiration-341199: {Name:mkdce0b9ecb644e55762c2ccfb433dbd61ed6665 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:11:52.127000  719703 start.go:364] duration metric: took 97.856µs to acquireMachinesLock for "cert-expiration-341199"
	I1014 21:11:52.127026  719703 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-341199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-341199 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:11:52.127092  719703 start.go:125] createHost starting for "" (driver="docker")
	I1014 21:11:52.130631  719703 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 21:11:52.130880  719703 start.go:159] libmachine.API.Create for "cert-expiration-341199" (driver="docker")
	I1014 21:11:52.130925  719703 client.go:168] LocalClient.Create starting
	I1014 21:11:52.131016  719703 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem
	I1014 21:11:52.131052  719703 main.go:141] libmachine: Decoding PEM data...
	I1014 21:11:52.131064  719703 main.go:141] libmachine: Parsing certificate...
	I1014 21:11:52.131124  719703 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem
	I1014 21:11:52.131142  719703 main.go:141] libmachine: Decoding PEM data...
	I1014 21:11:52.131151  719703 main.go:141] libmachine: Parsing certificate...
	I1014 21:11:52.131545  719703 cli_runner.go:164] Run: docker network inspect cert-expiration-341199 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 21:11:52.147856  719703 cli_runner.go:211] docker network inspect cert-expiration-341199 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 21:11:52.147948  719703 network_create.go:284] running [docker network inspect cert-expiration-341199] to gather additional debugging logs...
	I1014 21:11:52.147963  719703 cli_runner.go:164] Run: docker network inspect cert-expiration-341199
	W1014 21:11:52.165825  719703 cli_runner.go:211] docker network inspect cert-expiration-341199 returned with exit code 1
	I1014 21:11:52.165862  719703 network_create.go:287] error running [docker network inspect cert-expiration-341199]: docker network inspect cert-expiration-341199: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-341199 not found
	I1014 21:11:52.165874  719703 network_create.go:289] output of [docker network inspect cert-expiration-341199]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-341199 not found
	
	** /stderr **
	I1014 21:11:52.165970  719703 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:11:52.181804  719703 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3483fd8ccce IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:16:f0:2b:be:5b} reservation:<nil>}
	I1014 21:11:52.182120  719703 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-92eecc9d5dcd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:46:39:04:de:cc:9d} reservation:<nil>}
	I1014 21:11:52.182478  719703 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6db3c426d67c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:2a:f4:c0:70:fe} reservation:<nil>}
	I1014 21:11:52.182929  719703 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0ebb0}
	I1014 21:11:52.182947  719703 network_create.go:124] attempt to create docker network cert-expiration-341199 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1014 21:11:52.183013  719703 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-341199 cert-expiration-341199
	I1014 21:11:52.249929  719703 network_create.go:108] docker network cert-expiration-341199 192.168.76.0/24 created
	I1014 21:11:52.249952  719703 kic.go:121] calculated static IP "192.168.76.2" for the "cert-expiration-341199" container
	I1014 21:11:52.250033  719703 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 21:11:52.266345  719703 cli_runner.go:164] Run: docker volume create cert-expiration-341199 --label name.minikube.sigs.k8s.io=cert-expiration-341199 --label created_by.minikube.sigs.k8s.io=true
	I1014 21:11:52.284196  719703 oci.go:103] Successfully created a docker volume cert-expiration-341199
	I1014 21:11:52.284280  719703 cli_runner.go:164] Run: docker run --rm --name cert-expiration-341199-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-341199 --entrypoint /usr/bin/test -v cert-expiration-341199:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 21:11:52.830424  719703 oci.go:107] Successfully prepared a docker volume cert-expiration-341199
	I1014 21:11:52.830453  719703 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:11:52.830470  719703 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 21:11:52.830551  719703 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-341199:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 21:11:57.262363  719703 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-341199:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.431777681s)
	I1014 21:11:57.262382  719703 kic.go:203] duration metric: took 4.431909326s to extract preloaded images to volume ...
	W1014 21:11:57.262532  719703 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1014 21:11:57.262643  719703 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 21:11:57.324165  719703 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-341199 --name cert-expiration-341199 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-341199 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-341199 --network cert-expiration-341199 --ip 192.168.76.2 --volume cert-expiration-341199:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 21:11:57.636891  719703 cli_runner.go:164] Run: docker container inspect cert-expiration-341199 --format={{.State.Running}}
	I1014 21:11:57.661160  719703 cli_runner.go:164] Run: docker container inspect cert-expiration-341199 --format={{.State.Status}}
	I1014 21:11:57.688455  719703 cli_runner.go:164] Run: docker exec cert-expiration-341199 stat /var/lib/dpkg/alternatives/iptables
	I1014 21:11:57.741104  719703 oci.go:144] the created container "cert-expiration-341199" has a running status.
	I1014 21:11:57.741140  719703 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/cert-expiration-341199/id_rsa...
	I1014 21:11:58.064626  719703 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-545047/.minikube/machines/cert-expiration-341199/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 21:11:58.100795  719703 cli_runner.go:164] Run: docker container inspect cert-expiration-341199 --format={{.State.Status}}
	I1014 21:11:58.128686  719703 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 21:11:58.128698  719703 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-341199 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 21:11:58.197866  719703 cli_runner.go:164] Run: docker container inspect cert-expiration-341199 --format={{.State.Status}}
	I1014 21:11:58.232992  719703 machine.go:93] provisionDockerMachine start ...
	I1014 21:11:58.233097  719703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341199
	I1014 21:11:58.264171  719703 main.go:141] libmachine: Using SSH client type: native
	I1014 21:11:58.264600  719703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33772 <nil> <nil>}
	I1014 21:11:58.264609  719703 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:11:58.265589  719703 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42974->127.0.0.1:33772: read: connection reset by peer
	I1014 21:12:01.412271  719703 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-341199
	
	I1014 21:12:01.412285  719703 ubuntu.go:182] provisioning hostname "cert-expiration-341199"
	I1014 21:12:01.412372  719703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341199
	I1014 21:12:01.430154  719703 main.go:141] libmachine: Using SSH client type: native
	I1014 21:12:01.430466  719703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33772 <nil> <nil>}
	I1014 21:12:01.430476  719703 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-341199 && echo "cert-expiration-341199" | sudo tee /etc/hostname
	I1014 21:12:01.586111  719703 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-341199
	
	I1014 21:12:01.586196  719703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341199
	I1014 21:12:01.603799  719703 main.go:141] libmachine: Using SSH client type: native
	I1014 21:12:01.604101  719703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33772 <nil> <nil>}
	I1014 21:12:01.604116  719703 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-341199' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-341199/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-341199' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:12:01.756663  719703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
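	The shell snippet above makes the node's own hostname resolve locally by rewriting the 127.0.1.1 entry. A quick check of the result, assuming the same container, would be:
	    docker exec cert-expiration-341199 grep cert-expiration-341199 /etc/hosts
	which should show a line of the form 127.0.1.1 cert-expiration-341199.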
	I1014 21:12:01.756680  719703 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:12:01.756711  719703 ubuntu.go:190] setting up certificates
	I1014 21:12:01.756720  719703 provision.go:84] configureAuth start
	I1014 21:12:01.756784  719703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-341199
	I1014 21:12:01.777564  719703 provision.go:143] copyHostCerts
	I1014 21:12:01.777630  719703 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:12:01.777637  719703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:12:01.777713  719703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:12:01.777806  719703 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:12:01.777811  719703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:12:01.777835  719703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:12:01.777884  719703 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:12:01.777887  719703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:12:01.777908  719703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:12:01.777952  719703 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-341199 san=[127.0.0.1 192.168.76.2 cert-expiration-341199 localhost minikube]
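	The server certificate generated here carries the SANs listed above (127.0.0.1, 192.168.76.2, cert-expiration-341199, localhost, minikube). A sketch of how to inspect them afterwards with openssl, using the ServerCertPath from the auth options:
	    openssl x509 -noout -text -in /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'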
	I1014 21:12:02.775313  719703 provision.go:177] copyRemoteCerts
	I1014 21:12:02.775366  719703 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:12:02.775406  719703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341199
	I1014 21:12:02.796834  719703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33772 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/cert-expiration-341199/id_rsa Username:docker}
	I1014 21:12:02.901094  719703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:12:02.920070  719703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1014 21:12:02.952903  719703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 21:12:02.977348  719703 provision.go:87] duration metric: took 1.220615354s to configureAuth
	I1014 21:12:02.977364  719703 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:12:02.977547  719703 config.go:182] Loaded profile config "cert-expiration-341199": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:12:02.977648  719703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341199
	I1014 21:12:02.995877  719703 main.go:141] libmachine: Using SSH client type: native
	I1014 21:12:02.996180  719703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33772 <nil> <nil>}
	I1014 21:12:02.996193  719703 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:12:03.256368  719703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:12:03.256382  719703 machine.go:96] duration metric: took 5.023372442s to provisionDockerMachine
	I1014 21:12:03.256390  719703 client.go:171] duration metric: took 11.125460218s to LocalClient.Create
	I1014 21:12:03.256412  719703 start.go:167] duration metric: took 11.125535156s to libmachine.API.Create "cert-expiration-341199"
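	The crio.minikube drop-in written a few lines above only adds an --insecure-registry flag for the service CIDR. If the systemctl restart crio embedded in that SSH command had failed, a first-pass check (a sketch, assuming the systemd-based kicbase image) would be:
	    docker exec cert-expiration-341199 cat /etc/sysconfig/crio.minikube
	    docker exec cert-expiration-341199 systemctl status crio --no-pager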
	I1014 21:12:03.256434  719703 start.go:293] postStartSetup for "cert-expiration-341199" (driver="docker")
	I1014 21:12:03.256443  719703 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:12:03.256521  719703 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:12:03.256560  719703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341199
	I1014 21:12:03.274340  719703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33772 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/cert-expiration-341199/id_rsa Username:docker}
	I1014 21:12:03.376418  719703 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:12:03.379707  719703 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:12:03.379726  719703 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:12:03.379744  719703 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:12:03.379799  719703 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:12:03.379879  719703 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:12:03.379988  719703 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:12:03.387569  719703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:12:03.404802  719703 start.go:296] duration metric: took 148.35377ms for postStartSetup
	I1014 21:12:03.405160  719703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-341199
	I1014 21:12:03.421854  719703 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/config.json ...
	I1014 21:12:03.422141  719703 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:12:03.422180  719703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341199
	I1014 21:12:03.443544  719703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33772 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/cert-expiration-341199/id_rsa Username:docker}
	I1014 21:12:03.541324  719703 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:12:03.546100  719703 start.go:128] duration metric: took 11.41899346s to createHost
	I1014 21:12:03.546114  719703 start.go:83] releasing machines lock for "cert-expiration-341199", held for 11.419107095s
	I1014 21:12:03.546194  719703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-341199
	I1014 21:12:03.562945  719703 ssh_runner.go:195] Run: cat /version.json
	I1014 21:12:03.562967  719703 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:12:03.562986  719703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341199
	I1014 21:12:03.563029  719703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341199
	I1014 21:12:03.580589  719703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33772 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/cert-expiration-341199/id_rsa Username:docker}
	I1014 21:12:03.581765  719703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33772 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/cert-expiration-341199/id_rsa Username:docker}
	I1014 21:12:03.680159  719703 ssh_runner.go:195] Run: systemctl --version
	I1014 21:12:03.769671  719703 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:12:03.806207  719703 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:12:03.810685  719703 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:12:03.810751  719703 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:12:03.838373  719703 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1014 21:12:03.838386  719703 start.go:495] detecting cgroup driver to use...
	I1014 21:12:03.838418  719703 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:12:03.838470  719703 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:12:03.855064  719703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:12:03.868531  719703 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:12:03.868587  719703 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:12:03.886366  719703 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:12:03.905144  719703 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:12:04.027050  719703 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:12:04.163231  719703 docker.go:234] disabling docker service ...
	I1014 21:12:04.163309  719703 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:12:04.186825  719703 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:12:04.201706  719703 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:12:04.315780  719703 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:12:04.439626  719703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:12:04.454456  719703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:12:04.470009  719703 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:12:04.470087  719703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:12:04.479178  719703 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:12:04.479243  719703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:12:04.488479  719703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:12:04.497616  719703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:12:04.506729  719703 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:12:04.515580  719703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:12:04.524362  719703 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:12:04.538394  719703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
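	After this run of sed edits, /etc/crio/crio.conf.d/02-crio.conf should contain roughly the following settings; this is reconstructed from the commands above, not captured from the node:
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]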
	I1014 21:12:04.547307  719703 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:12:04.554911  719703 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:12:04.562871  719703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:12:04.682260  719703 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 21:12:04.808239  719703 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:12:04.808298  719703 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:12:04.812114  719703 start.go:563] Will wait 60s for crictl version
	I1014 21:12:04.812170  719703 ssh_runner.go:195] Run: which crictl
	I1014 21:12:04.815871  719703 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:12:04.840905  719703 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:12:04.841014  719703 ssh_runner.go:195] Run: crio --version
	I1014 21:12:04.869062  719703 ssh_runner.go:195] Run: crio --version
	I1014 21:12:04.901989  719703 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 21:12:04.904912  719703 cli_runner.go:164] Run: docker network inspect cert-expiration-341199 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:12:04.921407  719703 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1014 21:12:04.925455  719703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:12:04.935328  719703 kubeadm.go:883] updating cluster {Name:cert-expiration-341199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-341199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:12:04.935423  719703 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:12:04.935476  719703 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:12:04.973330  719703 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:12:04.973342  719703 crio.go:433] Images already preloaded, skipping extraction
	I1014 21:12:04.973398  719703 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:12:05.000741  719703 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:12:05.000754  719703 cache_images.go:85] Images are preloaded, skipping loading
	I1014 21:12:05.000761  719703 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1014 21:12:05.000874  719703 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-341199 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-341199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
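	The unit fragment above becomes the systemd drop-in that the scp a few lines below writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the empty ExecStart= line is the standard systemd idiom for clearing the packaged ExecStart before substituting minikube's flags. On the node the merged unit can be reviewed with:
	    systemctl cat kubelet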
	I1014 21:12:05.000964  719703 ssh_runner.go:195] Run: crio config
	I1014 21:12:05.063107  719703 cni.go:84] Creating CNI manager for ""
	I1014 21:12:05.063127  719703 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:12:05.063146  719703 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 21:12:05.063169  719703 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-341199 NodeName:cert-expiration-341199 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:12:05.063309  719703 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-341199"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
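	This generated configuration is shipped to /var/tmp/minikube/kubeadm.yaml (via the kubeadm.yaml.new scp below) and fed to kubeadm init --config. As a hypothetical offline check, recent kubeadm releases can validate such a file without starting anything:
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml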
	
	I1014 21:12:05.063389  719703 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:12:05.072011  719703 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:12:05.072080  719703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:12:05.080674  719703 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1014 21:12:05.095695  719703 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:12:05.110525  719703 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1014 21:12:05.124491  719703 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:12:05.128202  719703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:12:05.138387  719703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:12:05.255706  719703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:12:05.271741  719703 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199 for IP: 192.168.76.2
	I1014 21:12:05.271753  719703 certs.go:195] generating shared ca certs ...
	I1014 21:12:05.271768  719703 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:12:05.271905  719703 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:12:05.271947  719703 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:12:05.271954  719703 certs.go:257] generating profile certs ...
	I1014 21:12:05.272007  719703 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/client.key
	I1014 21:12:05.272016  719703 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/client.crt with IP's: []
	I1014 21:12:05.431935  719703 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/client.crt ...
	I1014 21:12:05.431951  719703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/client.crt: {Name:mk38c66944b04f386730d3e6c1700ce7da8e638c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:12:05.432170  719703 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/client.key ...
	I1014 21:12:05.432178  719703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/client.key: {Name:mk9c7120c2d48b6d27ba850c5ae30403220b4755 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:12:05.432283  719703 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/apiserver.key.c1b5bd1e
	I1014 21:12:05.432298  719703 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/apiserver.crt.c1b5bd1e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1014 21:12:05.706463  719703 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/apiserver.crt.c1b5bd1e ...
	I1014 21:12:05.706479  719703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/apiserver.crt.c1b5bd1e: {Name:mk43571028d031e55986b766dbe1acf48620f45e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:12:05.706681  719703 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/apiserver.key.c1b5bd1e ...
	I1014 21:12:05.706690  719703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/apiserver.key.c1b5bd1e: {Name:mk676bd19c707631f13b74fd6afebed371b7ce3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:12:05.706781  719703 certs.go:382] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/apiserver.crt.c1b5bd1e -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/apiserver.crt
	I1014 21:12:05.706856  719703 certs.go:386] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/apiserver.key.c1b5bd1e -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/apiserver.key
	I1014 21:12:05.706907  719703 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/proxy-client.key
	I1014 21:12:05.706919  719703 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/proxy-client.crt with IP's: []
	I1014 21:12:06.553312  719703 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/proxy-client.crt ...
	I1014 21:12:06.553329  719703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/proxy-client.crt: {Name:mk64f23707284ca2bb0a4eede4f0021ae792f8f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:12:06.553528  719703 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/proxy-client.key ...
	I1014 21:12:06.553536  719703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/proxy-client.key: {Name:mk57f4e5b0ef5c32f98cd89d358da1a51e1b5346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:12:06.553756  719703 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:12:06.553794  719703 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:12:06.553801  719703 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:12:06.553826  719703 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:12:06.553846  719703 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:12:06.553868  719703 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:12:06.553906  719703 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:12:06.554513  719703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:12:06.574066  719703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:12:06.594364  719703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:12:06.614087  719703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:12:06.632629  719703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1014 21:12:06.651377  719703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 21:12:06.669729  719703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:12:06.688899  719703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/cert-expiration-341199/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 21:12:06.706398  719703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:12:06.724238  719703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:12:06.742086  719703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:12:06.760958  719703 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:12:06.774552  719703 ssh_runner.go:195] Run: openssl version
	I1014 21:12:06.780845  719703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:12:06.791297  719703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:12:06.795503  719703 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:12:06.795568  719703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:12:06.837577  719703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:12:06.847260  719703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:12:06.855792  719703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:12:06.859639  719703 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:12:06.859701  719703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:12:06.901252  719703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 21:12:06.911309  719703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:12:06.920229  719703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:12:06.924583  719703 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:12:06.924641  719703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:12:06.971259  719703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
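	The /etc/ssl/certs links created above follow OpenSSL's subject-hash naming: the link name is the output of openssl x509 -hash plus a .0 suffix. For the minikube CA in this run:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	should print b5213941, matching the b5213941.0 symlink just created.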
	I1014 21:12:06.980618  719703 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:12:06.984265  719703 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 21:12:06.984308  719703 kubeadm.go:400] StartCluster: {Name:cert-expiration-341199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-341199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:12:06.984383  719703 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:12:06.984456  719703 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:12:07.014038  719703 cri.go:89] found id: ""
	I1014 21:12:07.014105  719703 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:12:07.023085  719703 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 21:12:07.030946  719703 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 21:12:07.031000  719703 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 21:12:07.039106  719703 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 21:12:07.039124  719703 kubeadm.go:157] found existing configuration files:
	
	I1014 21:12:07.039175  719703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 21:12:07.047195  719703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 21:12:07.047255  719703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 21:12:07.055433  719703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 21:12:07.063517  719703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 21:12:07.063575  719703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 21:12:07.071026  719703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 21:12:07.079214  719703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 21:12:07.079272  719703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 21:12:07.087704  719703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 21:12:07.095522  719703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 21:12:07.095577  719703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 21:12:07.103360  719703 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 21:12:07.174984  719703 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1014 21:12:07.175221  719703 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1014 21:12:07.246562  719703 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 21:12:24.882703  719703 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 21:12:24.882753  719703 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 21:12:24.882843  719703 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 21:12:24.882900  719703 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1014 21:12:24.882944  719703 kubeadm.go:318] OS: Linux
	I1014 21:12:24.882990  719703 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 21:12:24.883040  719703 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1014 21:12:24.883088  719703 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 21:12:24.883137  719703 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 21:12:24.883186  719703 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 21:12:24.883235  719703 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 21:12:24.883291  719703 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 21:12:24.883341  719703 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 21:12:24.883388  719703 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1014 21:12:24.883462  719703 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 21:12:24.883563  719703 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 21:12:24.883656  719703 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 21:12:24.883719  719703 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 21:12:24.886702  719703 out.go:252]   - Generating certificates and keys ...
	I1014 21:12:24.886802  719703 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 21:12:24.886873  719703 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 21:12:24.886941  719703 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 21:12:24.887006  719703 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 21:12:24.887090  719703 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 21:12:24.887156  719703 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 21:12:24.887219  719703 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 21:12:24.887363  719703 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-341199 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1014 21:12:24.887421  719703 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 21:12:24.887552  719703 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-341199 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1014 21:12:24.887620  719703 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 21:12:24.887684  719703 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 21:12:24.887730  719703 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 21:12:24.887802  719703 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 21:12:24.887858  719703 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 21:12:24.887917  719703 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 21:12:24.887979  719703 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 21:12:24.888051  719703 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 21:12:24.888115  719703 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 21:12:24.888203  719703 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 21:12:24.888275  719703 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 21:12:24.893113  719703 out.go:252]   - Booting up control plane ...
	I1014 21:12:24.893227  719703 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 21:12:24.893334  719703 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 21:12:24.893436  719703 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 21:12:24.893553  719703 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 21:12:24.893670  719703 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 21:12:24.893786  719703 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 21:12:24.893887  719703 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 21:12:24.893941  719703 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 21:12:24.894075  719703 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 21:12:24.894188  719703 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 21:12:24.894253  719703 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000856155s
	I1014 21:12:24.894354  719703 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 21:12:24.894444  719703 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1014 21:12:24.894549  719703 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 21:12:24.894639  719703 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 21:12:24.894730  719703 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.479299767s
	I1014 21:12:24.894803  719703 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.406918466s
	I1014 21:12:24.894885  719703 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.502690554s
	I1014 21:12:24.895007  719703 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 21:12:24.895152  719703 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 21:12:24.895319  719703 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 21:12:24.895549  719703 kubeadm.go:318] [mark-control-plane] Marking the node cert-expiration-341199 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 21:12:24.895612  719703 kubeadm.go:318] [bootstrap-token] Using token: qf1w61.lsljtoxsgzdu6blj
	I1014 21:12:24.900458  719703 out.go:252]   - Configuring RBAC rules ...
	I1014 21:12:24.900594  719703 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 21:12:24.900685  719703 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 21:12:24.900838  719703 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 21:12:24.900973  719703 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 21:12:24.901092  719703 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 21:12:24.901209  719703 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 21:12:24.901338  719703 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 21:12:24.901404  719703 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1014 21:12:24.901456  719703 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1014 21:12:24.901460  719703 kubeadm.go:318] 
	I1014 21:12:24.901522  719703 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1014 21:12:24.901525  719703 kubeadm.go:318] 
	I1014 21:12:24.901605  719703 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1014 21:12:24.901608  719703 kubeadm.go:318] 
	I1014 21:12:24.901642  719703 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1014 21:12:24.901705  719703 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 21:12:24.901758  719703 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 21:12:24.901761  719703 kubeadm.go:318] 
	I1014 21:12:24.901817  719703 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1014 21:12:24.901820  719703 kubeadm.go:318] 
	I1014 21:12:24.901869  719703 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 21:12:24.901872  719703 kubeadm.go:318] 
	I1014 21:12:24.901933  719703 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1014 21:12:24.902022  719703 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 21:12:24.902100  719703 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 21:12:24.902103  719703 kubeadm.go:318] 
	I1014 21:12:24.902192  719703 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 21:12:24.902278  719703 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1014 21:12:24.902282  719703 kubeadm.go:318] 
	I1014 21:12:24.902379  719703 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token qf1w61.lsljtoxsgzdu6blj \
	I1014 21:12:24.902493  719703 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a \
	I1014 21:12:24.902513  719703 kubeadm.go:318] 	--control-plane 
	I1014 21:12:24.902516  719703 kubeadm.go:318] 
	I1014 21:12:24.902614  719703 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1014 21:12:24.902622  719703 kubeadm.go:318] 
	I1014 21:12:24.902723  719703 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token qf1w61.lsljtoxsgzdu6blj \
	I1014 21:12:24.902843  719703 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a 
	I1014 21:12:24.902865  719703 cni.go:84] Creating CNI manager for ""
	I1014 21:12:24.902873  719703 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:12:24.907808  719703 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1014 21:12:24.910740  719703 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 21:12:24.914999  719703 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1014 21:12:24.915010  719703 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 21:12:24.931563  719703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 21:12:25.241999  719703 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 21:12:25.242122  719703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:12:25.242199  719703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-341199 minikube.k8s.io/updated_at=2025_10_14T21_12_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662 minikube.k8s.io/name=cert-expiration-341199 minikube.k8s.io/primary=true
	I1014 21:12:25.503661  719703 ops.go:34] apiserver oom_adj: -16
	I1014 21:12:25.503675  719703 kubeadm.go:1113] duration metric: took 261.603828ms to wait for elevateKubeSystemPrivileges
	I1014 21:12:25.503700  719703 kubeadm.go:402] duration metric: took 18.519395892s to StartCluster
	I1014 21:12:25.503737  719703 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:12:25.503821  719703 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:12:25.504479  719703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:12:25.504717  719703 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 21:12:25.504729  719703 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:12:25.504972  719703 config.go:182] Loaded profile config "cert-expiration-341199": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:12:25.505010  719703 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 21:12:25.505070  719703 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-341199"
	I1014 21:12:25.505083  719703 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-341199"
	I1014 21:12:25.505105  719703 host.go:66] Checking if "cert-expiration-341199" exists ...
	I1014 21:12:25.505557  719703 cli_runner.go:164] Run: docker container inspect cert-expiration-341199 --format={{.State.Status}}
	I1014 21:12:25.506056  719703 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-341199"
	I1014 21:12:25.506074  719703 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-341199"
	I1014 21:12:25.506338  719703 cli_runner.go:164] Run: docker container inspect cert-expiration-341199 --format={{.State.Status}}
	I1014 21:12:25.510185  719703 out.go:179] * Verifying Kubernetes components...
	I1014 21:12:25.518818  719703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:12:25.540029  719703 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-341199"
	I1014 21:12:25.540057  719703 host.go:66] Checking if "cert-expiration-341199" exists ...
	I1014 21:12:25.540571  719703 cli_runner.go:164] Run: docker container inspect cert-expiration-341199 --format={{.State.Status}}
	I1014 21:12:25.548941  719703 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:12:25.551993  719703 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:12:25.552005  719703 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 21:12:25.552078  719703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341199
	I1014 21:12:25.576473  719703 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 21:12:25.576494  719703 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 21:12:25.576561  719703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341199
	I1014 21:12:25.596559  719703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33772 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/cert-expiration-341199/id_rsa Username:docker}
	I1014 21:12:25.620238  719703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33772 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/cert-expiration-341199/id_rsa Username:docker}
	I1014 21:12:25.768942  719703 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 21:12:25.794809  719703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:12:25.827278  719703 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 21:12:25.876540  719703 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:12:26.174276  719703 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1014 21:12:26.176179  719703 api_server.go:52] waiting for apiserver process to appear ...
	I1014 21:12:26.176376  719703 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 21:12:26.450248  719703 api_server.go:72] duration metric: took 945.492394ms to wait for apiserver process to appear ...
	I1014 21:12:26.450259  719703 api_server.go:88] waiting for apiserver healthz status ...
	I1014 21:12:26.450274  719703 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1014 21:12:26.453280  719703 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1014 21:12:26.456137  719703 addons.go:514] duration metric: took 951.111534ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1014 21:12:26.462712  719703 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1014 21:12:26.464434  719703 api_server.go:141] control plane version: v1.34.1
	I1014 21:12:26.464450  719703 api_server.go:131] duration metric: took 14.185873ms to wait for apiserver health ...
	I1014 21:12:26.464459  719703 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 21:12:26.467420  719703 system_pods.go:59] 5 kube-system pods found
	I1014 21:12:26.467446  719703 system_pods.go:61] "etcd-cert-expiration-341199" [3e6e2425-d79c-4551-8d87-eb6ac42d4faa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 21:12:26.467455  719703 system_pods.go:61] "kube-apiserver-cert-expiration-341199" [08d86978-dd6f-4c39-8314-95b55f13bb39] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 21:12:26.467462  719703 system_pods.go:61] "kube-controller-manager-cert-expiration-341199" [e0b8bebe-ea95-4483-9977-727af93308d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 21:12:26.467466  719703 system_pods.go:61] "kube-scheduler-cert-expiration-341199" [93791ab6-6597-45d3-99b4-fb2409f7fb9a] Running
	I1014 21:12:26.467473  719703 system_pods.go:61] "storage-provisioner" [8352cf30-9ff2-4ea7-bd34-eb96aa21beb6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1014 21:12:26.467477  719703 system_pods.go:74] duration metric: took 3.014253ms to wait for pod list to return data ...
	I1014 21:12:26.467488  719703 kubeadm.go:586] duration metric: took 962.738215ms to wait for: map[apiserver:true system_pods:true]
	I1014 21:12:26.467499  719703 node_conditions.go:102] verifying NodePressure condition ...
	I1014 21:12:26.469858  719703 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 21:12:26.469884  719703 node_conditions.go:123] node cpu capacity is 2
	I1014 21:12:26.469894  719703 node_conditions.go:105] duration metric: took 2.39095ms to run NodePressure ...
	I1014 21:12:26.469904  719703 start.go:241] waiting for startup goroutines ...
	I1014 21:12:26.678497  719703 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-341199" context rescaled to 1 replicas
	I1014 21:12:26.678528  719703 start.go:246] waiting for cluster config update ...
	I1014 21:12:26.678539  719703 start.go:255] writing updated cluster config ...
	I1014 21:12:26.678836  719703 ssh_runner.go:195] Run: rm -f paused
	I1014 21:12:26.751129  719703 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1014 21:12:26.756546  719703 out.go:179] * Done! kubectl is now configured to use "cert-expiration-341199" cluster and "default" namespace by default
	I1014 21:12:52.161143  715133 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000366625s
	I1014 21:12:52.161260  715133 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000352503s
	I1014 21:12:52.161372  715133 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000154724s
	I1014 21:12:52.161383  715133 kubeadm.go:318] 
	I1014 21:12:52.161482  715133 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 21:12:52.161580  715133 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 21:12:52.161674  715133 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 21:12:52.161771  715133 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 21:12:52.161848  715133 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 21:12:52.161945  715133 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 21:12:52.161951  715133 kubeadm.go:318] 
	I1014 21:12:52.166087  715133 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1014 21:12:52.166871  715133 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1014 21:12:52.167068  715133 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 21:12:52.168632  715133 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 21:12:52.168793  715133 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 21:12:52.168872  715133 kubeadm.go:402] duration metric: took 8m13.793238745s to StartCluster
	I1014 21:12:52.168943  715133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:12:52.169038  715133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:12:52.193837  715133 cri.go:89] found id: ""
	I1014 21:12:52.193872  715133 logs.go:282] 0 containers: []
	W1014 21:12:52.193881  715133 logs.go:284] No container was found matching "kube-apiserver"
	I1014 21:12:52.193888  715133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:12:52.193946  715133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:12:52.224652  715133 cri.go:89] found id: ""
	I1014 21:12:52.224677  715133 logs.go:282] 0 containers: []
	W1014 21:12:52.224686  715133 logs.go:284] No container was found matching "etcd"
	I1014 21:12:52.224692  715133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:12:52.224755  715133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:12:52.255383  715133 cri.go:89] found id: ""
	I1014 21:12:52.255408  715133 logs.go:282] 0 containers: []
	W1014 21:12:52.255417  715133 logs.go:284] No container was found matching "coredns"
	I1014 21:12:52.255424  715133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:12:52.255495  715133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:12:52.284182  715133 cri.go:89] found id: ""
	I1014 21:12:52.284208  715133 logs.go:282] 0 containers: []
	W1014 21:12:52.284216  715133 logs.go:284] No container was found matching "kube-scheduler"
	I1014 21:12:52.284222  715133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:12:52.284279  715133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:12:52.309744  715133 cri.go:89] found id: ""
	I1014 21:12:52.309768  715133 logs.go:282] 0 containers: []
	W1014 21:12:52.309777  715133 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:12:52.309783  715133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:12:52.309858  715133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:12:52.346466  715133 cri.go:89] found id: ""
	I1014 21:12:52.346491  715133 logs.go:282] 0 containers: []
	W1014 21:12:52.346500  715133 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 21:12:52.346507  715133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:12:52.346563  715133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:12:52.373318  715133 cri.go:89] found id: ""
	I1014 21:12:52.373352  715133 logs.go:282] 0 containers: []
	W1014 21:12:52.373361  715133 logs.go:284] No container was found matching "kindnet"
	I1014 21:12:52.373388  715133 logs.go:123] Gathering logs for kubelet ...
	I1014 21:12:52.373405  715133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:12:52.465758  715133 logs.go:123] Gathering logs for dmesg ...
	I1014 21:12:52.465793  715133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 21:12:52.485569  715133 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:12:52.485600  715133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:12:52.556813  715133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 21:12:52.548216    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:52.548812    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:52.550302    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:52.550864    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:52.552616    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 21:12:52.548216    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:52.548812    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:52.550302    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:52.550864    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:52.552616    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:12:52.556837  715133 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:12:52.556851  715133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:12:52.632570  715133 logs.go:123] Gathering logs for container status ...
	I1014 21:12:52.632650  715133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1014 21:12:52.663854  715133 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50149622s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000366625s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000352503s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000154724s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 21:12:52.663908  715133 out.go:285] * 
	W1014 21:12:52.663968  715133 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50149622s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000366625s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000352503s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000154724s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 21:12:52.663986  715133 out.go:285] * 
	W1014 21:12:52.666155  715133 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 21:12:52.672877  715133 out.go:203] 
	W1014 21:12:52.676723  715133 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50149622s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000366625s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000352503s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000154724s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 21:12:52.676747  715133 out.go:285] * 
	I1014 21:12:52.681768  715133 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 21:12:46 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:46.944960782Z" level=info msg="createCtr: removing container ee574231844aec9fefa6ceec9471c56ab0956de36621eb45d5e82bbb52a152a7" id=7fddd919-a724-43ae-ae4d-1d153e5e496b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:12:46 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:46.94499764Z" level=info msg="createCtr: deleting container ee574231844aec9fefa6ceec9471c56ab0956de36621eb45d5e82bbb52a152a7 from storage" id=7fddd919-a724-43ae-ae4d-1d153e5e496b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:12:46 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:46.947701014Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-force-systemd-env-347955_kube-system_7f729f7a781c73fae44f78581c500232_0" id=7fddd919-a724-43ae-ae4d-1d153e5e496b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:12:47 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:47.925208509Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=4c0ea978-c3fe-4846-9c0b-ed43474f4aee name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:12:47 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:47.92879745Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=2e889832-1d89-4593-8ea8-84ad3136e941 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:12:47 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:47.929743514Z" level=info msg="Creating container: kube-system/etcd-force-systemd-env-347955/etcd" id=a9a040c1-46ca-4328-adf8-82ffd4bae957 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:12:47 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:47.930072536Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:12:47 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:47.934391358Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:12:47 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:47.934969696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:12:47 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:47.946403334Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a9a040c1-46ca-4328-adf8-82ffd4bae957 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:12:47 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:47.947958162Z" level=info msg="createCtr: deleting container ID bd7aa5d4f6e12fc68964f4a9a69a7813b5f61f7bfce24b53ccdfc367abe64aa3 from idIndex" id=a9a040c1-46ca-4328-adf8-82ffd4bae957 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:12:47 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:47.948000567Z" level=info msg="createCtr: removing container bd7aa5d4f6e12fc68964f4a9a69a7813b5f61f7bfce24b53ccdfc367abe64aa3" id=a9a040c1-46ca-4328-adf8-82ffd4bae957 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:12:47 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:47.948035406Z" level=info msg="createCtr: deleting container bd7aa5d4f6e12fc68964f4a9a69a7813b5f61f7bfce24b53ccdfc367abe64aa3 from storage" id=a9a040c1-46ca-4328-adf8-82ffd4bae957 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:12:47 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:47.950558486Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-force-systemd-env-347955_kube-system_a56677476469ae184725e85f7aedbea9_0" id=a9a040c1-46ca-4328-adf8-82ffd4bae957 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:12:51 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:51.925268993Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=db6876c3-066c-4ff6-856a-5f59b9f4ebb5 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:12:51 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:51.92632349Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=2c8bfdfa-8875-4099-9a96-fee9600dce4b name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:12:51 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:51.927459457Z" level=info msg="Creating container: kube-system/kube-scheduler-force-systemd-env-347955/kube-scheduler" id=1ceb850e-57db-40fa-9e9a-57c730c6396e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:12:51 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:51.927734103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:12:51 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:51.932127838Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:12:51 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:51.93272303Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:12:51 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:51.943786562Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=1ceb850e-57db-40fa-9e9a-57c730c6396e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:12:51 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:51.945058087Z" level=info msg="createCtr: deleting container ID 1459b36296a7091cd95130d8ba5b528cfa2f057f544f6494cbee19d45fac7978 from idIndex" id=1ceb850e-57db-40fa-9e9a-57c730c6396e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:12:51 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:51.945111676Z" level=info msg="createCtr: removing container 1459b36296a7091cd95130d8ba5b528cfa2f057f544f6494cbee19d45fac7978" id=1ceb850e-57db-40fa-9e9a-57c730c6396e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:12:51 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:51.945147607Z" level=info msg="createCtr: deleting container 1459b36296a7091cd95130d8ba5b528cfa2f057f544f6494cbee19d45fac7978 from storage" id=1ceb850e-57db-40fa-9e9a-57c730c6396e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:12:51 force-systemd-env-347955 crio[837]: time="2025-10-14T21:12:51.95038608Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-force-systemd-env-347955_kube-system_f65ec20b9d3adedfb778b4a3d533d480_0" id=1ceb850e-57db-40fa-9e9a-57c730c6396e name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 21:12:53.739655    2485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:53.740427    2485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:53.742006    2485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:53.742326    2485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 21:12:53.743935    2485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +3.850661] overlayfs: idmapped layers are currently not supported
	[Oct14 20:35] overlayfs: idmapped layers are currently not supported
	[Oct14 20:36] overlayfs: idmapped layers are currently not supported
	[Oct14 20:38] overlayfs: idmapped layers are currently not supported
	[Oct14 20:43] overlayfs: idmapped layers are currently not supported
	[ +36.710003] overlayfs: idmapped layers are currently not supported
	[Oct14 20:45] overlayfs: idmapped layers are currently not supported
	[Oct14 20:46] overlayfs: idmapped layers are currently not supported
	[Oct14 20:47] overlayfs: idmapped layers are currently not supported
	[Oct14 20:48] overlayfs: idmapped layers are currently not supported
	[Oct14 20:49] overlayfs: idmapped layers are currently not supported
	[ +15.861847] overlayfs: idmapped layers are currently not supported
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	[Oct14 21:03] overlayfs: idmapped layers are currently not supported
	[Oct14 21:04] overlayfs: idmapped layers are currently not supported
	[Oct14 21:12] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:12:53 up  3:55,  0 user,  load average: 1.64, 1.07, 1.48
	Linux force-systemd-env-347955 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 21:12:46 force-systemd-env-347955 kubelet[1782]:         container kube-controller-manager start failed in pod kube-controller-manager-force-systemd-env-347955_kube-system(7f729f7a781c73fae44f78581c500232): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 21:12:46 force-systemd-env-347955 kubelet[1782]:  > logger="UnhandledError"
	Oct 14 21:12:46 force-systemd-env-347955 kubelet[1782]: E1014 21:12:46.948381    1782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-force-systemd-env-347955" podUID="7f729f7a781c73fae44f78581c500232"
	Oct 14 21:12:47 force-systemd-env-347955 kubelet[1782]: E1014 21:12:47.924821    1782 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-347955\" not found" node="force-systemd-env-347955"
	Oct 14 21:12:47 force-systemd-env-347955 kubelet[1782]: E1014 21:12:47.951149    1782 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 21:12:47 force-systemd-env-347955 kubelet[1782]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 21:12:47 force-systemd-env-347955 kubelet[1782]:  > podSandboxID="3a709aa1e64402dbdc86704e2b66c2b113a6b146f50adfdad63753230c048b43"
	Oct 14 21:12:47 force-systemd-env-347955 kubelet[1782]: E1014 21:12:47.951246    1782 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 21:12:47 force-systemd-env-347955 kubelet[1782]:         container etcd start failed in pod etcd-force-systemd-env-347955_kube-system(a56677476469ae184725e85f7aedbea9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 21:12:47 force-systemd-env-347955 kubelet[1782]:  > logger="UnhandledError"
	Oct 14 21:12:47 force-systemd-env-347955 kubelet[1782]: E1014 21:12:47.951275    1782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-force-systemd-env-347955" podUID="a56677476469ae184725e85f7aedbea9"
	Oct 14 21:12:48 force-systemd-env-347955 kubelet[1782]: E1014 21:12:48.557966    1782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.85.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/force-systemd-env-347955?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="7s"
	Oct 14 21:12:48 force-systemd-env-347955 kubelet[1782]: I1014 21:12:48.755565    1782 kubelet_node_status.go:75] "Attempting to register node" node="force-systemd-env-347955"
	Oct 14 21:12:48 force-systemd-env-347955 kubelet[1782]: E1014 21:12:48.755931    1782 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.85.2:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="force-systemd-env-347955"
	Oct 14 21:12:49 force-systemd-env-347955 kubelet[1782]: E1014 21:12:49.535787    1782 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.85.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.85.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-env-347955.186e77b6f0debc3f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-env-347955,UID:force-systemd-env-347955,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-env-347955 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-env-347955,},FirstTimestamp:2025-10-14 21:08:51.957324863 +0000 UTC m=+1.301094703,LastTimestamp:2025-10-14 21:08:51.957324863 +0000 UTC m=+1.301094703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:force-systemd-env-347955,}"
	Oct 14 21:12:51 force-systemd-env-347955 kubelet[1782]: E1014 21:12:51.124131    1782 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.85.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 14 21:12:51 force-systemd-env-347955 kubelet[1782]: E1014 21:12:51.924827    1782 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-347955\" not found" node="force-systemd-env-347955"
	Oct 14 21:12:51 force-systemd-env-347955 kubelet[1782]: E1014 21:12:51.950733    1782 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 21:12:51 force-systemd-env-347955 kubelet[1782]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 21:12:51 force-systemd-env-347955 kubelet[1782]:  > podSandboxID="49b4d2cb0a71d0fed959aa9277293b4eb007e4cc95450dd62990ae4aa3cc008b"
	Oct 14 21:12:51 force-systemd-env-347955 kubelet[1782]: E1014 21:12:51.950853    1782 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 21:12:51 force-systemd-env-347955 kubelet[1782]:         container kube-scheduler start failed in pod kube-scheduler-force-systemd-env-347955_kube-system(f65ec20b9d3adedfb778b4a3d533d480): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 21:12:51 force-systemd-env-347955 kubelet[1782]:  > logger="UnhandledError"
	Oct 14 21:12:51 force-systemd-env-347955 kubelet[1782]: E1014 21:12:51.950934    1782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-force-systemd-env-347955" podUID="f65ec20b9d3adedfb778b4a3d533d480"
	Oct 14 21:12:51 force-systemd-env-347955 kubelet[1782]: E1014 21:12:51.986161    1782 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-env-347955\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-347955 -n force-systemd-env-347955
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-347955 -n force-systemd-env-347955: exit status 6 (337.458688ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 21:12:54.192224  722475 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-347955" does not appear in /home/jenkins/minikube-integration/21409-545047/kubeconfig

                                                
                                                
** /stderr **
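(editor note: the stale-context warning above points at `minikube update-context`; a purely illustrative sketch of the profile-scoped form, which was not run here since the profile is deleted just below)
	out/minikube-linux-arm64 update-context -p force-systemd-env-347955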
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-env-347955" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-347955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-347955
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-347955: (1.929393991s)
--- FAIL: TestForceSystemdEnv (512.26s)
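(editor note: the repeated "container create failed: cannot open sd-bus: No such file or directory" errors above come from the container runtime trying to reach systemd over sd-bus while the forced-systemd node has no usable systemd/dbus endpoint; a minimal diagnostic sketch, assuming the profile still existed and that CRI-O exposes its effective config via `crio config` — neither was run as part of this test)
	out/minikube-linux-arm64 ssh -p force-systemd-env-347955 -- sudo crio config | grep cgroup_manager
	out/minikube-linux-arm64 ssh -p force-systemd-env-347955 -- systemctl is-active dbus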

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-276405 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-276405 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-hl2sc" [a1eb48f4-3b32-46db-bae4-dfbd2182ab3b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-276405 -n functional-276405
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-14 20:18:52.799916703 +0000 UTC m=+1290.218070442
functional_test.go:1645: (dbg) Run:  kubectl --context functional-276405 describe po hello-node-connect-7d85dfc575-hl2sc -n default
functional_test.go:1645: (dbg) kubectl --context functional-276405 describe po hello-node-connect-7d85dfc575-hl2sc -n default:
Name:             hello-node-connect-7d85dfc575-hl2sc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-276405/192.168.49.2
Start Time:       Tue, 14 Oct 2025 20:08:52 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rqbqm (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-rqbqm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hl2sc to functional-276405
Normal   Pulling    7m10s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 10m)     kubelet            Error: ErrImagePull
Warning  Failed     4m55s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m42s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-276405 logs hello-node-connect-7d85dfc575-hl2sc -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-276405 logs hello-node-connect-7d85dfc575-hl2sc -n default: exit status 1 (99.099627ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-hl2sc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-276405 logs hello-node-connect-7d85dfc575-hl2sc -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-276405 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-hl2sc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-276405/192.168.49.2
Start Time:       Tue, 14 Oct 2025 20:08:52 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rqbqm (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-rqbqm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hl2sc to functional-276405
Normal   Pulling    7m11s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m11s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m11s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m43s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-276405 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-276405 logs -l app=hello-node-connect: exit status 1 (86.644007ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-hl2sc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-276405 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-276405 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.92.244
IPs:                      10.109.92.244
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32479/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
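(editor note: the pull failures above, "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list", mean the node's registries configuration rejects unqualified image names; a hedged workaround sketch, assuming the image is published on Docker Hub under docker.io/kicbase/echo-server — not verified by this report)
	kubectl --context functional-276405 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:latest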
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-276405
helpers_test.go:243: (dbg) docker inspect functional-276405:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a895941f16311a29dfd6ceba0e59525bd06a226bfb6802ed9ccbeadc3c6a98d8",
	        "Created": "2025-10-14T20:06:04.012912668Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 563144,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:06:04.08243863Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/a895941f16311a29dfd6ceba0e59525bd06a226bfb6802ed9ccbeadc3c6a98d8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a895941f16311a29dfd6ceba0e59525bd06a226bfb6802ed9ccbeadc3c6a98d8/hostname",
	        "HostsPath": "/var/lib/docker/containers/a895941f16311a29dfd6ceba0e59525bd06a226bfb6802ed9ccbeadc3c6a98d8/hosts",
	        "LogPath": "/var/lib/docker/containers/a895941f16311a29dfd6ceba0e59525bd06a226bfb6802ed9ccbeadc3c6a98d8/a895941f16311a29dfd6ceba0e59525bd06a226bfb6802ed9ccbeadc3c6a98d8-json.log",
	        "Name": "/functional-276405",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-276405:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-276405",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a895941f16311a29dfd6ceba0e59525bd06a226bfb6802ed9ccbeadc3c6a98d8",
	                "LowerDir": "/var/lib/docker/overlay2/e70c8642879084da6994ecdb1419bccaaf5afff5baa49db27d370068a077e54b-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e70c8642879084da6994ecdb1419bccaaf5afff5baa49db27d370068a077e54b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e70c8642879084da6994ecdb1419bccaaf5afff5baa49db27d370068a077e54b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e70c8642879084da6994ecdb1419bccaaf5afff5baa49db27d370068a077e54b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-276405",
	                "Source": "/var/lib/docker/volumes/functional-276405/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-276405",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-276405",
	                "name.minikube.sigs.k8s.io": "functional-276405",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d6838f33cc7d12be3dd87317bdf474b39d7ead9647467c887b6be4258b5b11e3",
	            "SandboxKey": "/var/run/docker/netns/d6838f33cc7d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33517"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33518"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33519"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-276405": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:59:e0:20:c3:a7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "02bcfd061d7b068537c3ad8e89ef96c00bc1ce2d037053c5e38ff0cc73f23d26",
	                    "EndpointID": "14395f6ae29c3f9056b9a09a6b60fa9370b9abd3b8a337d01275e5b676c568a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-276405",
	                        "a895941f1631"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
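(editor note: the full `docker inspect` dump above can be narrowed to single fields with a Go template, in the same style the minikube logs below use for the SSH port; a small usage sketch, not part of the test run)
	docker inspect -f '{{.HostConfig.Privileged}} {{.HostConfig.CgroupnsMode}}' functional-276405
	docker inspect -f '{{(index .NetworkSettings.Networks "functional-276405").IPAddress}}' functional-276405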
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-276405 -n functional-276405
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-276405 logs -n 25: (1.440260268s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-276405 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                   │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:07 UTC │ 14 Oct 25 20:07 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 20:07 UTC │ 14 Oct 25 20:07 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                       │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 20:07 UTC │ 14 Oct 25 20:07 UTC │
	│ kubectl │ functional-276405 kubectl -- --context functional-276405 get pods                                                         │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:07 UTC │ 14 Oct 25 20:07 UTC │
	│ start   │ -p functional-276405 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                  │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:07 UTC │ 14 Oct 25 20:08 UTC │
	│ service │ invalid-svc -p functional-276405                                                                                          │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │                     │
	│ cp      │ functional-276405 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                        │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │ 14 Oct 25 20:08 UTC │
	│ config  │ functional-276405 config unset cpus                                                                                       │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │ 14 Oct 25 20:08 UTC │
	│ config  │ functional-276405 config get cpus                                                                                         │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │                     │
	│ config  │ functional-276405 config set cpus 2                                                                                       │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │ 14 Oct 25 20:08 UTC │
	│ config  │ functional-276405 config get cpus                                                                                         │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │ 14 Oct 25 20:08 UTC │
	│ config  │ functional-276405 config unset cpus                                                                                       │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │ 14 Oct 25 20:08 UTC │
	│ ssh     │ functional-276405 ssh -n functional-276405 sudo cat /home/docker/cp-test.txt                                              │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │ 14 Oct 25 20:08 UTC │
	│ config  │ functional-276405 config get cpus                                                                                         │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │                     │
	│ ssh     │ functional-276405 ssh echo hello                                                                                          │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │ 14 Oct 25 20:08 UTC │
	│ cp      │ functional-276405 cp functional-276405:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd614882265/001/cp-test.txt │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │ 14 Oct 25 20:08 UTC │
	│ ssh     │ functional-276405 ssh cat /etc/hostname                                                                                   │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │ 14 Oct 25 20:08 UTC │
	│ ssh     │ functional-276405 ssh -n functional-276405 sudo cat /home/docker/cp-test.txt                                              │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │ 14 Oct 25 20:08 UTC │
	│ tunnel  │ functional-276405 tunnel --alsologtostderr                                                                                │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │                     │
	│ tunnel  │ functional-276405 tunnel --alsologtostderr                                                                                │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │                     │
	│ cp      │ functional-276405 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │ 14 Oct 25 20:08 UTC │
	│ tunnel  │ functional-276405 tunnel --alsologtostderr                                                                                │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │                     │
	│ ssh     │ functional-276405 ssh -n functional-276405 sudo cat /tmp/does/not/exist/cp-test.txt                                       │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │ 14 Oct 25 20:08 UTC │
	│ addons  │ functional-276405 addons list                                                                                             │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │ 14 Oct 25 20:08 UTC │
	│ addons  │ functional-276405 addons list -o json                                                                                     │ functional-276405 │ jenkins │ v1.37.0 │ 14 Oct 25 20:08 UTC │ 14 Oct 25 20:08 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:07:55
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:07:55.216592  567321 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:07:55.216733  567321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:07:55.216737  567321 out.go:374] Setting ErrFile to fd 2...
	I1014 20:07:55.216741  567321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:07:55.216985  567321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:07:55.217356  567321 out.go:368] Setting JSON to false
	I1014 20:07:55.218346  567321 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10226,"bootTime":1760462250,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 20:07:55.218405  567321 start.go:141] virtualization:  
	I1014 20:07:55.221950  567321 out.go:179] * [functional-276405] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 20:07:55.225027  567321 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:07:55.225147  567321 notify.go:220] Checking for updates...
	I1014 20:07:55.230817  567321 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:07:55.233626  567321 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 20:07:55.236485  567321 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 20:07:55.239324  567321 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 20:07:55.242209  567321 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:07:55.245552  567321 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:07:55.245654  567321 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:07:55.272911  567321 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 20:07:55.273034  567321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:07:55.329718  567321 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-14 20:07:55.320181574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 20:07:55.329814  567321 docker.go:318] overlay module found
	I1014 20:07:55.332881  567321 out.go:179] * Using the docker driver based on existing profile
	I1014 20:07:55.335787  567321 start.go:305] selected driver: docker
	I1014 20:07:55.335795  567321 start.go:925] validating driver "docker" against &{Name:functional-276405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-276405 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:07:55.335895  567321 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:07:55.336000  567321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:07:55.391332  567321 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-14 20:07:55.382626418 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 20:07:55.391711  567321 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:07:55.391740  567321 cni.go:84] Creating CNI manager for ""
	I1014 20:07:55.391797  567321 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 20:07:55.391838  567321 start.go:349] cluster config:
	{Name:functional-276405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-276405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:07:55.395160  567321 out.go:179] * Starting "functional-276405" primary control-plane node in "functional-276405" cluster
	I1014 20:07:55.398446  567321 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:07:55.401424  567321 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:07:55.404469  567321 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:07:55.404530  567321 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 20:07:55.404538  567321 cache.go:58] Caching tarball of preloaded images
	I1014 20:07:55.404567  567321 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:07:55.404623  567321 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 20:07:55.404632  567321 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:07:55.404744  567321 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/config.json ...
	I1014 20:07:55.424071  567321 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:07:55.424082  567321 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:07:55.424098  567321 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:07:55.424119  567321 start.go:360] acquireMachinesLock for functional-276405: {Name:mke2c9b016b0710caa076a2066d45b27d6f31cd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:07:55.424186  567321 start.go:364] duration metric: took 49.896µs to acquireMachinesLock for "functional-276405"
	I1014 20:07:55.424210  567321 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:07:55.424249  567321 fix.go:54] fixHost starting: 
	I1014 20:07:55.424564  567321 cli_runner.go:164] Run: docker container inspect functional-276405 --format={{.State.Status}}
	I1014 20:07:55.442434  567321 fix.go:112] recreateIfNeeded on functional-276405: state=Running err=<nil>
	W1014 20:07:55.442454  567321 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:07:55.445676  567321 out.go:252] * Updating the running docker "functional-276405" container ...
	I1014 20:07:55.445699  567321 machine.go:93] provisionDockerMachine start ...
	I1014 20:07:55.445778  567321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-276405
	I1014 20:07:55.463097  567321 main.go:141] libmachine: Using SSH client type: native
	I1014 20:07:55.463431  567321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33517 <nil> <nil>}
	I1014 20:07:55.463439  567321 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:07:55.616656  567321 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-276405
	
	I1014 20:07:55.616670  567321 ubuntu.go:182] provisioning hostname "functional-276405"
	I1014 20:07:55.616749  567321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-276405
	I1014 20:07:55.637505  567321 main.go:141] libmachine: Using SSH client type: native
	I1014 20:07:55.637801  567321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33517 <nil> <nil>}
	I1014 20:07:55.637809  567321 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-276405 && echo "functional-276405" | sudo tee /etc/hostname
	I1014 20:07:55.789808  567321 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-276405
	
	I1014 20:07:55.789874  567321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-276405
	I1014 20:07:55.808714  567321 main.go:141] libmachine: Using SSH client type: native
	I1014 20:07:55.809002  567321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33517 <nil> <nil>}
	I1014 20:07:55.809016  567321 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-276405' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-276405/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-276405' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:07:55.952678  567321 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:07:55.952694  567321 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 20:07:55.952711  567321 ubuntu.go:190] setting up certificates
	I1014 20:07:55.952720  567321 provision.go:84] configureAuth start
	I1014 20:07:55.952778  567321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-276405
	I1014 20:07:55.971106  567321 provision.go:143] copyHostCerts
	I1014 20:07:55.971165  567321 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 20:07:55.971183  567321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:07:55.971256  567321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 20:07:55.971366  567321 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 20:07:55.971370  567321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:07:55.971395  567321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 20:07:55.971454  567321 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 20:07:55.971458  567321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:07:55.971481  567321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 20:07:55.971535  567321 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.functional-276405 san=[127.0.0.1 192.168.49.2 functional-276405 localhost minikube]
	I1014 20:07:56.369252  567321 provision.go:177] copyRemoteCerts
	I1014 20:07:56.369303  567321 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:07:56.369339  567321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-276405
	I1014 20:07:56.387672  567321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33517 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/functional-276405/id_rsa Username:docker}
	I1014 20:07:56.488394  567321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:07:56.512306  567321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 20:07:56.529944  567321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:07:56.548058  567321 provision.go:87] duration metric: took 595.324808ms to configureAuth
	I1014 20:07:56.548075  567321 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:07:56.548284  567321 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:07:56.548490  567321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-276405
	I1014 20:07:56.565876  567321 main.go:141] libmachine: Using SSH client type: native
	I1014 20:07:56.566183  567321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33517 <nil> <nil>}
	I1014 20:07:56.566205  567321 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:08:01.954214  567321 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:08:01.954226  567321 machine.go:96] duration metric: took 6.508521438s to provisionDockerMachine
	I1014 20:08:01.954236  567321 start.go:293] postStartSetup for "functional-276405" (driver="docker")
	I1014 20:08:01.954246  567321 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:08:01.954315  567321 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:08:01.954364  567321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-276405
	I1014 20:08:01.971248  567321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33517 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/functional-276405/id_rsa Username:docker}
	I1014 20:08:02.076432  567321 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:08:02.080017  567321 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:08:02.080036  567321 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:08:02.080046  567321 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 20:08:02.080104  567321 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 20:08:02.080184  567321 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 20:08:02.080259  567321 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/test/nested/copy/546910/hosts -> hosts in /etc/test/nested/copy/546910
	I1014 20:08:02.080301  567321 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/546910
	I1014 20:08:02.088337  567321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:08:02.106252  567321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/test/nested/copy/546910/hosts --> /etc/test/nested/copy/546910/hosts (40 bytes)
	I1014 20:08:02.124153  567321 start.go:296] duration metric: took 169.901583ms for postStartSetup
	I1014 20:08:02.124235  567321 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:08:02.124273  567321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-276405
	I1014 20:08:02.141422  567321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33517 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/functional-276405/id_rsa Username:docker}
	I1014 20:08:02.243054  567321 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:08:02.250170  567321 fix.go:56] duration metric: took 6.825945131s for fixHost
	I1014 20:08:02.250187  567321 start.go:83] releasing machines lock for "functional-276405", held for 6.825993182s
	I1014 20:08:02.250258  567321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-276405
	I1014 20:08:02.271998  567321 ssh_runner.go:195] Run: cat /version.json
	I1014 20:08:02.272039  567321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-276405
	I1014 20:08:02.274937  567321 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:08:02.275014  567321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-276405
	I1014 20:08:02.303068  567321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33517 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/functional-276405/id_rsa Username:docker}
	I1014 20:08:02.306333  567321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33517 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/functional-276405/id_rsa Username:docker}
	I1014 20:08:02.404154  567321 ssh_runner.go:195] Run: systemctl --version
	I1014 20:08:02.500416  567321 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:08:02.539595  567321 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:08:02.544228  567321 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:08:02.544288  567321 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:08:02.552413  567321 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:08:02.552427  567321 start.go:495] detecting cgroup driver to use...
	I1014 20:08:02.552458  567321 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 20:08:02.552504  567321 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:08:02.568016  567321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:08:02.581638  567321 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:08:02.581705  567321 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:08:02.597521  567321 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:08:02.611295  567321 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:08:02.754083  567321 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:08:02.896168  567321 docker.go:234] disabling docker service ...
	I1014 20:08:02.896223  567321 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:08:02.911400  567321 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:08:02.924585  567321 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:08:03.059498  567321 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:08:03.202638  567321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:08:03.219172  567321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:08:03.235449  567321 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:08:03.235503  567321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:08:03.246370  567321 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:08:03.246427  567321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:08:03.255542  567321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:08:03.264625  567321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:08:03.273470  567321 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:08:03.281594  567321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:08:03.290626  567321 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:08:03.298879  567321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:08:03.307784  567321 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:08:03.315314  567321 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:08:03.322835  567321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:08:03.458091  567321 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:08:08.668390  567321 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.210275343s)
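
The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10.1, switches cgroup_manager to cgroupfs, and seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0 before the daemon-reload and restart. A small Go sketch of the two central replacements (the regexes mirror the sed expressions in the log; run as root):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        // pause_image = "registry.k8s.io/pause:3.10.1"
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        // cgroup_manager = "cgroupfs"
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            panic(err)
        }
    }
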
	I1014 20:08:08.668405  567321 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:08:08.668455  567321 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:08:08.672242  567321 start.go:563] Will wait 60s for crictl version
	I1014 20:08:08.672297  567321 ssh_runner.go:195] Run: which crictl
	I1014 20:08:08.675835  567321 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:08:08.700549  567321 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
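
After the restart, start.go waits up to 60s for the CRI socket at /var/run/crio/crio.sock before probing crictl. A compact sketch of that bounded polling pattern:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls os.Stat until the path exists or the deadline passes,
    // mirroring the 60s wait the log performs before calling crictl.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("CRI socket is ready")
    }
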
	I1014 20:08:08.700642  567321 ssh_runner.go:195] Run: crio --version
	I1014 20:08:08.729117  567321 ssh_runner.go:195] Run: crio --version
	I1014 20:08:08.762957  567321 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:08:08.766051  567321 cli_runner.go:164] Run: docker network inspect functional-276405 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:08:08.783012  567321 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:08:08.790185  567321 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1014 20:08:08.793205  567321 kubeadm.go:883] updating cluster {Name:functional-276405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-276405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:08:08.793328  567321 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:08:08.793404  567321 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:08:08.835024  567321 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:08:08.835036  567321 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:08:08.835090  567321 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:08:08.864734  567321 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:08:08.864747  567321 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:08:08.864752  567321 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1014 20:08:08.864878  567321 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-276405 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-276405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:08:08.864955  567321 ssh_runner.go:195] Run: crio config
	I1014 20:08:08.925522  567321 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1014 20:08:08.925542  567321 cni.go:84] Creating CNI manager for ""
	I1014 20:08:08.925550  567321 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 20:08:08.925570  567321 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:08:08.925592  567321 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-276405 NodeName:functional-276405 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:08:08.925727  567321 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-276405"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 20:08:08.925803  567321 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:08:08.933551  567321 binaries.go:44] Found k8s binaries, skipping transfer
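
The kubeadm configuration printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is rendered by minikube from templates and written to /var/tmp/minikube/kubeadm.yaml.new further down. A much-reduced sketch of that templating step; this is not minikube's actual template, only the idea, with a handful of the fields visible above:

    package main

    import (
        "os"
        "text/template"
    )

    // A stand-in for the kubeadm config template: only the fields shown at the
    // top of the rendered YAML in the log.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("init").Parse(initCfg))
        err := t.Execute(os.Stdout, struct {
            AdvertiseAddress, CRISocket, NodeName string
            BindPort                              int
        }{"192.168.49.2", "unix:///var/run/crio/crio.sock", "functional-276405", 8441})
        if err != nil {
            panic(err)
        }
    }
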
	I1014 20:08:08.933627  567321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 20:08:08.941085  567321 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1014 20:08:08.953188  567321 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:08:08.965693  567321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1014 20:08:08.978785  567321 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 20:08:08.983067  567321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:08:09.121393  567321 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:08:09.135466  567321 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405 for IP: 192.168.49.2
	I1014 20:08:09.135476  567321 certs.go:195] generating shared ca certs ...
	I1014 20:08:09.135501  567321 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:08:09.135642  567321 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 20:08:09.135678  567321 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 20:08:09.135684  567321 certs.go:257] generating profile certs ...
	I1014 20:08:09.135763  567321 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.key
	I1014 20:08:09.135815  567321 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/apiserver.key.ebf7f04f
	I1014 20:08:09.135881  567321 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/proxy-client.key
	I1014 20:08:09.136000  567321 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 20:08:09.136029  567321 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 20:08:09.136037  567321 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 20:08:09.136060  567321 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:08:09.136085  567321 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:08:09.136106  567321 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 20:08:09.136152  567321 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:08:09.136795  567321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:08:09.155634  567321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:08:09.173317  567321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:08:09.191423  567321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 20:08:09.209029  567321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 20:08:09.228717  567321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:08:09.247137  567321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:08:09.264613  567321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:08:09.281810  567321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:08:09.299730  567321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 20:08:09.316626  567321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 20:08:09.334306  567321 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:08:09.346988  567321 ssh_runner.go:195] Run: openssl version
	I1014 20:08:09.353544  567321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:08:09.361748  567321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:08:09.365394  567321 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:08:09.365446  567321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:08:09.408134  567321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:08:09.415938  567321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 20:08:09.424227  567321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 20:08:09.428090  567321 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 20:08:09.428147  567321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 20:08:09.469307  567321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 20:08:09.477436  567321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 20:08:09.485506  567321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 20:08:09.489386  567321 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 20:08:09.489442  567321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 20:08:09.530328  567321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:08:09.539278  567321 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:08:09.543382  567321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:08:09.585220  567321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:08:09.627153  567321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:08:09.668169  567321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:08:09.709252  567321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:08:09.756766  567321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
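
Each openssl x509 -checkend 86400 call above asks whether a control-plane certificate expires within the next 24 hours (the answer comes back as the exit status). The equivalent check in Go using crypto/x509, with one of the certificate paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the same question `openssl x509 -checkend` answers via its exit status.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
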
	I1014 20:08:09.799063  567321 kubeadm.go:400] StartCluster: {Name:functional-276405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-276405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:08:09.799147  567321 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:08:09.799215  567321 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:08:09.829054  567321 cri.go:89] found id: "8d315b26954f11354d467686850d1b6b5c1045bdb0825c6b3695e7ac8aba9c97"
	I1014 20:08:09.829067  567321 cri.go:89] found id: "10040b6dcd76c63c89b284847cb296cdb0667b2d2a944bca97c8b15a9db34ff5"
	I1014 20:08:09.829070  567321 cri.go:89] found id: "56815432fe74330c4ba776195a49542f4dd95522b5d96f28a0ac52a1f4843fb3"
	I1014 20:08:09.829073  567321 cri.go:89] found id: "8b74fe836774bc9729df28f59e52a5dfaa3fdfd9d302ea018712346832b0eea1"
	I1014 20:08:09.829076  567321 cri.go:89] found id: "5d9c0d7ca4d482c93916e58add4d9102a988566f4f7a2cd1044eea3f91ccc2cb"
	I1014 20:08:09.829078  567321 cri.go:89] found id: "e2da0aed7e61278c61116dc46baa15ae194fb3e7abdee3d2feb9f9d2e8c5f092"
	I1014 20:08:09.829081  567321 cri.go:89] found id: "5db4d41a380b231d4d2176771a14c6842006496d53178e3f82300e5fe2ea7d26"
	I1014 20:08:09.829083  567321 cri.go:89] found id: "78f50d8ecb538b301eddcfd9a8de4f0fd694bc2b41dc8eb1b5f9b009c2d3ab49"
	I1014 20:08:09.829104  567321 cri.go:89] found id: ""
	I1014 20:08:09.829154  567321 ssh_runner.go:195] Run: sudo runc list -f json
	W1014 20:08:09.839957  567321 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:08:09Z" level=error msg="open /run/runc: no such file or directory"
	I1014 20:08:09.840025  567321 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:08:09.847790  567321 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 20:08:09.847800  567321 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 20:08:09.847847  567321 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 20:08:09.864942  567321 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:08:09.865520  567321 kubeconfig.go:125] found "functional-276405" server: "https://192.168.49.2:8441"
	I1014 20:08:09.867270  567321 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 20:08:09.875034  567321 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-14 20:06:13.203410718 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-14 20:08:08.973029521 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
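
The drift check shells out to diff -u against the previously applied kubeadm.yaml; a non-empty diff, like the enable-admission-plugins change shown above, is what triggers the control-plane reconfiguration that follows. A bare-bones sketch of that decision, using diff's exit status the same way:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new")
        out, err := cmd.CombinedOutput()
        if err == nil {
            fmt.Println("no drift: reuse the running control plane")
            return
        }
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
            // diff exits 1 when the files differ but could be compared.
            fmt.Printf("config drift detected, reconfigure:\n%s", out)
            return
        }
        panic(err)
    }
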
	I1014 20:08:09.875042  567321 kubeadm.go:1160] stopping kube-system containers ...
	I1014 20:08:09.875058  567321 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 20:08:09.875113  567321 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:08:09.902393  567321 cri.go:89] found id: "8d315b26954f11354d467686850d1b6b5c1045bdb0825c6b3695e7ac8aba9c97"
	I1014 20:08:09.902404  567321 cri.go:89] found id: "10040b6dcd76c63c89b284847cb296cdb0667b2d2a944bca97c8b15a9db34ff5"
	I1014 20:08:09.902407  567321 cri.go:89] found id: "56815432fe74330c4ba776195a49542f4dd95522b5d96f28a0ac52a1f4843fb3"
	I1014 20:08:09.902410  567321 cri.go:89] found id: "8b74fe836774bc9729df28f59e52a5dfaa3fdfd9d302ea018712346832b0eea1"
	I1014 20:08:09.902413  567321 cri.go:89] found id: "5d9c0d7ca4d482c93916e58add4d9102a988566f4f7a2cd1044eea3f91ccc2cb"
	I1014 20:08:09.902416  567321 cri.go:89] found id: "e2da0aed7e61278c61116dc46baa15ae194fb3e7abdee3d2feb9f9d2e8c5f092"
	I1014 20:08:09.902418  567321 cri.go:89] found id: "5db4d41a380b231d4d2176771a14c6842006496d53178e3f82300e5fe2ea7d26"
	I1014 20:08:09.902420  567321 cri.go:89] found id: "78f50d8ecb538b301eddcfd9a8de4f0fd694bc2b41dc8eb1b5f9b009c2d3ab49"
	I1014 20:08:09.902423  567321 cri.go:89] found id: ""
	I1014 20:08:09.902428  567321 cri.go:252] Stopping containers: [8d315b26954f11354d467686850d1b6b5c1045bdb0825c6b3695e7ac8aba9c97 10040b6dcd76c63c89b284847cb296cdb0667b2d2a944bca97c8b15a9db34ff5 56815432fe74330c4ba776195a49542f4dd95522b5d96f28a0ac52a1f4843fb3 8b74fe836774bc9729df28f59e52a5dfaa3fdfd9d302ea018712346832b0eea1 5d9c0d7ca4d482c93916e58add4d9102a988566f4f7a2cd1044eea3f91ccc2cb e2da0aed7e61278c61116dc46baa15ae194fb3e7abdee3d2feb9f9d2e8c5f092 5db4d41a380b231d4d2176771a14c6842006496d53178e3f82300e5fe2ea7d26 78f50d8ecb538b301eddcfd9a8de4f0fd694bc2b41dc8eb1b5f9b009c2d3ab49]
	I1014 20:08:09.902478  567321 ssh_runner.go:195] Run: which crictl
	I1014 20:08:09.906287  567321 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 8d315b26954f11354d467686850d1b6b5c1045bdb0825c6b3695e7ac8aba9c97 10040b6dcd76c63c89b284847cb296cdb0667b2d2a944bca97c8b15a9db34ff5 56815432fe74330c4ba776195a49542f4dd95522b5d96f28a0ac52a1f4843fb3 8b74fe836774bc9729df28f59e52a5dfaa3fdfd9d302ea018712346832b0eea1 5d9c0d7ca4d482c93916e58add4d9102a988566f4f7a2cd1044eea3f91ccc2cb e2da0aed7e61278c61116dc46baa15ae194fb3e7abdee3d2feb9f9d2e8c5f092 5db4d41a380b231d4d2176771a14c6842006496d53178e3f82300e5fe2ea7d26 78f50d8ecb538b301eddcfd9a8de4f0fd694bc2b41dc8eb1b5f9b009c2d3ab49
	I1014 20:08:09.969622  567321 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 20:08:10.079532  567321 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:08:10.089519  567321 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct 14 20:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct 14 20:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct 14 20:06 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct 14 20:06 /etc/kubernetes/scheduler.conf
	
	I1014 20:08:10.089587  567321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1014 20:08:10.098394  567321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1014 20:08:10.106706  567321 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:08:10.106766  567321 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:08:10.115000  567321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1014 20:08:10.123225  567321 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:08:10.123294  567321 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:08:10.131040  567321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1014 20:08:10.138750  567321 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:08:10.138805  567321 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:08:10.146289  567321 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:08:10.154528  567321 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 20:08:10.205753  567321 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 20:08:12.809270  567321 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.603490877s)
	I1014 20:08:12.809354  567321 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 20:08:13.029080  567321 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 20:08:13.105632  567321 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 20:08:13.181500  567321 api_server.go:52] waiting for apiserver process to appear ...
	I1014 20:08:13.181566  567321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:08:13.681800  567321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:08:14.182217  567321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:08:14.196774  567321 api_server.go:72] duration metric: took 1.015279466s to wait for apiserver process to appear ...
	I1014 20:08:14.196788  567321 api_server.go:88] waiting for apiserver healthz status ...
	I1014 20:08:14.196810  567321 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1014 20:08:17.888419  567321 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 20:08:17.888436  567321 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 20:08:17.888448  567321 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1014 20:08:18.000564  567321 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 20:08:18.000604  567321 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 20:08:18.196872  567321 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1014 20:08:18.215104  567321 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 20:08:18.215119  567321 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 20:08:18.698080  567321 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1014 20:08:18.710413  567321 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 20:08:18.710431  567321 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 20:08:19.196985  567321 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1014 20:08:19.211723  567321 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 20:08:19.211741  567321 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 20:08:19.697346  567321 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1014 20:08:19.705780  567321 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1014 20:08:19.719912  567321 api_server.go:141] control plane version: v1.34.1
	I1014 20:08:19.719928  567321 api_server.go:131] duration metric: took 5.523134991s to wait for apiserver health ...
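
The 403/500/200 sequence above is the apiserver starting up: anonymous requests to /healthz are rejected until the RBAC bootstrap roles exist, then individual post-start hooks flip from failed to ok, and finally the endpoint returns 200. A compact sketch of the polling loop; certificate verification is skipped here only because the serving cert is self-signed and this is a health probe, not real API traffic:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.49.2:8441/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy:", string(body))
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        panic("apiserver never became healthy")
    }
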
	I1014 20:08:19.719937  567321 cni.go:84] Creating CNI manager for ""
	I1014 20:08:19.719942  567321 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 20:08:19.723136  567321 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1014 20:08:19.726016  567321 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 20:08:19.731230  567321 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1014 20:08:19.731242  567321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 20:08:19.746083  567321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
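
The kindnet manifest is applied by running the bundled kubectl against the in-guest kubeconfig, as the command above shows. The same invocation expressed from Go, paths as logged (run inside the guest as root):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out))
        if err != nil {
            panic(err)
        }
    }
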
	I1014 20:08:20.374716  567321 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 20:08:20.380499  567321 system_pods.go:59] 8 kube-system pods found
	I1014 20:08:20.380527  567321 system_pods.go:61] "coredns-66bc5c9577-xf9vd" [a53fdc40-0a7e-4f6b-93a8-5d1176083874] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:08:20.380534  567321 system_pods.go:61] "etcd-functional-276405" [b087c00f-5981-4504-aaa8-ffb5f1088dd6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:08:20.380538  567321 system_pods.go:61] "kindnet-7zrq7" [d3c79e39-1b95-45cd-a6b8-f4cd155ee270] Running
	I1014 20:08:20.380544  567321 system_pods.go:61] "kube-apiserver-functional-276405" [ab6c0aba-9300-4ddf-86e8-620c9ea44f64] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:08:20.380549  567321 system_pods.go:61] "kube-controller-manager-functional-276405" [9d562d39-9e59-4f31-9114-1dd0e3fbd79a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:08:20.380553  567321 system_pods.go:61] "kube-proxy-r27qj" [ff6082ea-a84b-41a9-928d-61883e949847] Running
	I1014 20:08:20.380567  567321 system_pods.go:61] "kube-scheduler-functional-276405" [1baaa6c4-a76d-4a49-ab12-05ee60330b1f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 20:08:20.380572  567321 system_pods.go:61] "storage-provisioner" [c6b514bf-1462-4371-838d-7da460e2a035] Running
	I1014 20:08:20.380581  567321 system_pods.go:74] duration metric: took 5.851911ms to wait for pod list to return data ...
	I1014 20:08:20.380588  567321 node_conditions.go:102] verifying NodePressure condition ...
	I1014 20:08:20.383306  567321 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:08:20.383326  567321 node_conditions.go:123] node cpu capacity is 2
	I1014 20:08:20.383337  567321 node_conditions.go:105] duration metric: took 2.745004ms to run NodePressure ...
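
The waits that follow (kube-system pods appearing, NodePressure conditions) go through the Kubernetes API rather than SSH. A stripped-down sketch using client-go, assuming the k8s.io/client-go and k8s.io/apimachinery modules are available and pointing it at the kubeconfig path from the log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21409-545047/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
        }
    }
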
	I1014 20:08:20.383411  567321 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 20:08:20.648528  567321 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1014 20:08:20.656411  567321 kubeadm.go:743] kubelet initialised
	I1014 20:08:20.656422  567321 kubeadm.go:744] duration metric: took 7.881503ms waiting for restarted kubelet to initialise ...
	I1014 20:08:20.656436  567321 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 20:08:20.666173  567321 ops.go:34] apiserver oom_adj: -16
	I1014 20:08:20.666185  567321 kubeadm.go:601] duration metric: took 10.81837985s to restartPrimaryControlPlane
	I1014 20:08:20.666193  567321 kubeadm.go:402] duration metric: took 10.867140671s to StartCluster
	I1014 20:08:20.666210  567321 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:08:20.666283  567321 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 20:08:20.666995  567321 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:08:20.667241  567321 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:08:20.667540  567321 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:08:20.667556  567321 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:08:20.667694  567321 addons.go:69] Setting storage-provisioner=true in profile "functional-276405"
	I1014 20:08:20.667708  567321 addons.go:238] Setting addon storage-provisioner=true in "functional-276405"
	I1014 20:08:20.667709  567321 addons.go:69] Setting default-storageclass=true in profile "functional-276405"
	W1014 20:08:20.667714  567321 addons.go:247] addon storage-provisioner should already be in state true
	I1014 20:08:20.667721  567321 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-276405"
	I1014 20:08:20.667736  567321 host.go:66] Checking if "functional-276405" exists ...
	I1014 20:08:20.668075  567321 cli_runner.go:164] Run: docker container inspect functional-276405 --format={{.State.Status}}
	I1014 20:08:20.668189  567321 cli_runner.go:164] Run: docker container inspect functional-276405 --format={{.State.Status}}
	I1014 20:08:20.672362  567321 out.go:179] * Verifying Kubernetes components...
	I1014 20:08:20.675371  567321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:08:20.698414  567321 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 20:08:20.701397  567321 addons.go:238] Setting addon default-storageclass=true in "functional-276405"
	W1014 20:08:20.701408  567321 addons.go:247] addon default-storageclass should already be in state true
	I1014 20:08:20.701430  567321 host.go:66] Checking if "functional-276405" exists ...
	I1014 20:08:20.704506  567321 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:08:20.704516  567321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 20:08:20.704573  567321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-276405
	I1014 20:08:20.704875  567321 cli_runner.go:164] Run: docker container inspect functional-276405 --format={{.State.Status}}
	I1014 20:08:20.736064  567321 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 20:08:20.736076  567321 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 20:08:20.736142  567321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-276405
	I1014 20:08:20.761785  567321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33517 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/functional-276405/id_rsa Username:docker}
	I1014 20:08:20.765221  567321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33517 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/functional-276405/id_rsa Username:docker}
	I1014 20:08:20.920803  567321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:08:20.964596  567321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 20:08:21.011066  567321 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:08:22.011558  567321 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.090729694s)
	I1014 20:08:22.011667  567321 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.04705204s)
	I1014 20:08:22.011680  567321 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.000601564s)
	I1014 20:08:22.011700  567321 node_ready.go:35] waiting up to 6m0s for node "functional-276405" to be "Ready" ...
	I1014 20:08:22.015424  567321 node_ready.go:49] node "functional-276405" is "Ready"
	I1014 20:08:22.015453  567321 node_ready.go:38] duration metric: took 3.736567ms for node "functional-276405" to be "Ready" ...
	I1014 20:08:22.015466  567321 api_server.go:52] waiting for apiserver process to appear ...
	I1014 20:08:22.015564  567321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:08:22.023305  567321 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1014 20:08:22.026334  567321 addons.go:514] duration metric: took 1.358755565s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 20:08:22.035599  567321 api_server.go:72] duration metric: took 1.36831959s to wait for apiserver process to appear ...
	I1014 20:08:22.035613  567321 api_server.go:88] waiting for apiserver healthz status ...
	I1014 20:08:22.035642  567321 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1014 20:08:22.046740  567321 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1014 20:08:22.047855  567321 api_server.go:141] control plane version: v1.34.1
	I1014 20:08:22.047869  567321 api_server.go:131] duration metric: took 12.250329ms to wait for apiserver health ...
	I1014 20:08:22.047877  567321 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 20:08:22.052115  567321 system_pods.go:59] 8 kube-system pods found
	I1014 20:08:22.052135  567321 system_pods.go:61] "coredns-66bc5c9577-xf9vd" [a53fdc40-0a7e-4f6b-93a8-5d1176083874] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:08:22.052143  567321 system_pods.go:61] "etcd-functional-276405" [b087c00f-5981-4504-aaa8-ffb5f1088dd6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:08:22.052148  567321 system_pods.go:61] "kindnet-7zrq7" [d3c79e39-1b95-45cd-a6b8-f4cd155ee270] Running
	I1014 20:08:22.052154  567321 system_pods.go:61] "kube-apiserver-functional-276405" [ab6c0aba-9300-4ddf-86e8-620c9ea44f64] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:08:22.052159  567321 system_pods.go:61] "kube-controller-manager-functional-276405" [9d562d39-9e59-4f31-9114-1dd0e3fbd79a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:08:22.052164  567321 system_pods.go:61] "kube-proxy-r27qj" [ff6082ea-a84b-41a9-928d-61883e949847] Running
	I1014 20:08:22.052169  567321 system_pods.go:61] "kube-scheduler-functional-276405" [1baaa6c4-a76d-4a49-ab12-05ee60330b1f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 20:08:22.052173  567321 system_pods.go:61] "storage-provisioner" [c6b514bf-1462-4371-838d-7da460e2a035] Running
	I1014 20:08:22.052177  567321 system_pods.go:74] duration metric: took 4.29566ms to wait for pod list to return data ...
	I1014 20:08:22.052184  567321 default_sa.go:34] waiting for default service account to be created ...
	I1014 20:08:22.055020  567321 default_sa.go:45] found service account: "default"
	I1014 20:08:22.055032  567321 default_sa.go:55] duration metric: took 2.844362ms for default service account to be created ...
	I1014 20:08:22.055040  567321 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 20:08:22.057912  567321 system_pods.go:86] 8 kube-system pods found
	I1014 20:08:22.057929  567321 system_pods.go:89] "coredns-66bc5c9577-xf9vd" [a53fdc40-0a7e-4f6b-93a8-5d1176083874] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:08:22.057949  567321 system_pods.go:89] "etcd-functional-276405" [b087c00f-5981-4504-aaa8-ffb5f1088dd6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:08:22.057954  567321 system_pods.go:89] "kindnet-7zrq7" [d3c79e39-1b95-45cd-a6b8-f4cd155ee270] Running
	I1014 20:08:22.057961  567321 system_pods.go:89] "kube-apiserver-functional-276405" [ab6c0aba-9300-4ddf-86e8-620c9ea44f64] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:08:22.057968  567321 system_pods.go:89] "kube-controller-manager-functional-276405" [9d562d39-9e59-4f31-9114-1dd0e3fbd79a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:08:22.057971  567321 system_pods.go:89] "kube-proxy-r27qj" [ff6082ea-a84b-41a9-928d-61883e949847] Running
	I1014 20:08:22.057976  567321 system_pods.go:89] "kube-scheduler-functional-276405" [1baaa6c4-a76d-4a49-ab12-05ee60330b1f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 20:08:22.057980  567321 system_pods.go:89] "storage-provisioner" [c6b514bf-1462-4371-838d-7da460e2a035] Running
	I1014 20:08:22.057986  567321 system_pods.go:126] duration metric: took 2.941193ms to wait for k8s-apps to be running ...
	I1014 20:08:22.057992  567321 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 20:08:22.058052  567321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:08:22.071666  567321 system_svc.go:56] duration metric: took 13.658869ms WaitForService to wait for kubelet
	I1014 20:08:22.071685  567321 kubeadm.go:586] duration metric: took 1.404422448s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:08:22.071709  567321 node_conditions.go:102] verifying NodePressure condition ...
	I1014 20:08:22.075109  567321 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:08:22.075126  567321 node_conditions.go:123] node cpu capacity is 2
	I1014 20:08:22.075138  567321 node_conditions.go:105] duration metric: took 3.421351ms to run NodePressure ...
	I1014 20:08:22.075149  567321 start.go:241] waiting for startup goroutines ...
	I1014 20:08:22.075156  567321 start.go:246] waiting for cluster config update ...
	I1014 20:08:22.075167  567321 start.go:255] writing updated cluster config ...
	I1014 20:08:22.075481  567321 ssh_runner.go:195] Run: rm -f paused
	I1014 20:08:22.079983  567321 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:08:22.083701  567321 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xf9vd" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:08:23.090129  567321 pod_ready.go:94] pod "coredns-66bc5c9577-xf9vd" is "Ready"
	I1014 20:08:23.090144  567321 pod_ready.go:86] duration metric: took 1.006430691s for pod "coredns-66bc5c9577-xf9vd" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:08:23.093072  567321 pod_ready.go:83] waiting for pod "etcd-functional-276405" in "kube-system" namespace to be "Ready" or be gone ...
	W1014 20:08:25.106262  567321 pod_ready.go:104] pod "etcd-functional-276405" is not "Ready", error: <nil>
	W1014 20:08:27.598301  567321 pod_ready.go:104] pod "etcd-functional-276405" is not "Ready", error: <nil>
	I1014 20:08:28.099232  567321 pod_ready.go:94] pod "etcd-functional-276405" is "Ready"
	I1014 20:08:28.099247  567321 pod_ready.go:86] duration metric: took 5.006162905s for pod "etcd-functional-276405" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:08:28.101740  567321 pod_ready.go:83] waiting for pod "kube-apiserver-functional-276405" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:08:28.106803  567321 pod_ready.go:94] pod "kube-apiserver-functional-276405" is "Ready"
	I1014 20:08:28.106818  567321 pod_ready.go:86] duration metric: took 5.064439ms for pod "kube-apiserver-functional-276405" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:08:28.109688  567321 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-276405" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:08:28.114409  567321 pod_ready.go:94] pod "kube-controller-manager-functional-276405" is "Ready"
	I1014 20:08:28.114424  567321 pod_ready.go:86] duration metric: took 4.723821ms for pod "kube-controller-manager-functional-276405" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:08:28.117007  567321 pod_ready.go:83] waiting for pod "kube-proxy-r27qj" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:08:28.297720  567321 pod_ready.go:94] pod "kube-proxy-r27qj" is "Ready"
	I1014 20:08:28.297734  567321 pod_ready.go:86] duration metric: took 180.713751ms for pod "kube-proxy-r27qj" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:08:28.497155  567321 pod_ready.go:83] waiting for pod "kube-scheduler-functional-276405" in "kube-system" namespace to be "Ready" or be gone ...
	W1014 20:08:30.502820  567321 pod_ready.go:104] pod "kube-scheduler-functional-276405" is not "Ready", error: <nil>
	I1014 20:08:31.503321  567321 pod_ready.go:94] pod "kube-scheduler-functional-276405" is "Ready"
	I1014 20:08:31.503336  567321 pod_ready.go:86] duration metric: took 3.006166854s for pod "kube-scheduler-functional-276405" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:08:31.503346  567321 pod_ready.go:40] duration metric: took 9.423342119s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:08:31.566777  567321 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1014 20:08:31.569797  567321 out.go:179] * Done! kubectl is now configured to use "functional-276405" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 14 20:09:08 functional-276405 crio[3545]: time="2025-10-14T20:09:08.182620194Z" level=info msg="Checking pod default_hello-node-75c85bcc94-f57x4 for CNI network kindnet (type=ptp)"
	Oct 14 20:09:08 functional-276405 crio[3545]: time="2025-10-14T20:09:08.186444382Z" level=info msg="Ran pod sandbox 0439fa3cb5cb3583751b0702f41212c46e045ce3ceb56d8a54781a386fc2a7e4 with infra container: default/hello-node-75c85bcc94-f57x4/POD" id=0a1dcde3-1b1e-4007-8eb1-785248e8d10b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 20:09:08 functional-276405 crio[3545]: time="2025-10-14T20:09:08.190775634Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ba82667a-cd6c-41e2-8b52-970e9bc4b894 name=/runtime.v1.ImageService/PullImage
	Oct 14 20:09:08 functional-276405 crio[3545]: time="2025-10-14T20:09:08.202077455Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f4410835-cb29-4733-bbee-edf69eaaf614 name=/runtime.v1.ImageService/PullImage
	Oct 14 20:09:13 functional-276405 crio[3545]: time="2025-10-14T20:09:13.160180644Z" level=info msg="Stopping pod sandbox: 616abe4cd5e9b05c96d1f03a1d4074d222f4a3655547d9cc8e148caa769ac30e" id=bbe313ee-1091-49ee-bb8f-77d90fb8719d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 14 20:09:13 functional-276405 crio[3545]: time="2025-10-14T20:09:13.160237023Z" level=info msg="Stopped pod sandbox (already stopped): 616abe4cd5e9b05c96d1f03a1d4074d222f4a3655547d9cc8e148caa769ac30e" id=bbe313ee-1091-49ee-bb8f-77d90fb8719d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 14 20:09:13 functional-276405 crio[3545]: time="2025-10-14T20:09:13.160876478Z" level=info msg="Removing pod sandbox: 616abe4cd5e9b05c96d1f03a1d4074d222f4a3655547d9cc8e148caa769ac30e" id=718ed365-551b-4aaa-a04d-dca9a3f7997c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 14 20:09:13 functional-276405 crio[3545]: time="2025-10-14T20:09:13.164390605Z" level=info msg="Removed pod sandbox: 616abe4cd5e9b05c96d1f03a1d4074d222f4a3655547d9cc8e148caa769ac30e" id=718ed365-551b-4aaa-a04d-dca9a3f7997c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 14 20:09:13 functional-276405 crio[3545]: time="2025-10-14T20:09:13.165021642Z" level=info msg="Stopping pod sandbox: c66920dae7c4ccd75d6e28a1ae1f2ce593d73f9f51187fd54f04b66bcbef5e00" id=e2cb60b8-94e2-483b-9400-dffe1d603129 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 14 20:09:13 functional-276405 crio[3545]: time="2025-10-14T20:09:13.165068043Z" level=info msg="Stopped pod sandbox (already stopped): c66920dae7c4ccd75d6e28a1ae1f2ce593d73f9f51187fd54f04b66bcbef5e00" id=e2cb60b8-94e2-483b-9400-dffe1d603129 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 14 20:09:13 functional-276405 crio[3545]: time="2025-10-14T20:09:13.165397174Z" level=info msg="Removing pod sandbox: c66920dae7c4ccd75d6e28a1ae1f2ce593d73f9f51187fd54f04b66bcbef5e00" id=9e20fe53-3b5b-429e-9a3e-5fabf54b3aed name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 14 20:09:13 functional-276405 crio[3545]: time="2025-10-14T20:09:13.16885054Z" level=info msg="Removed pod sandbox: c66920dae7c4ccd75d6e28a1ae1f2ce593d73f9f51187fd54f04b66bcbef5e00" id=9e20fe53-3b5b-429e-9a3e-5fabf54b3aed name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 14 20:09:13 functional-276405 crio[3545]: time="2025-10-14T20:09:13.169534722Z" level=info msg="Stopping pod sandbox: 825e58db57b8cfe40dbe910d4b53b9058110292bec537b537b755fce694ffe01" id=8768da1f-1485-49a7-b540-860914a57093 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 14 20:09:13 functional-276405 crio[3545]: time="2025-10-14T20:09:13.169587277Z" level=info msg="Stopped pod sandbox (already stopped): 825e58db57b8cfe40dbe910d4b53b9058110292bec537b537b755fce694ffe01" id=8768da1f-1485-49a7-b540-860914a57093 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 14 20:09:13 functional-276405 crio[3545]: time="2025-10-14T20:09:13.169976701Z" level=info msg="Removing pod sandbox: 825e58db57b8cfe40dbe910d4b53b9058110292bec537b537b755fce694ffe01" id=a90318e1-4b06-4bf0-a5af-977d7d06e2d0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 14 20:09:13 functional-276405 crio[3545]: time="2025-10-14T20:09:13.175184418Z" level=info msg="Removed pod sandbox: 825e58db57b8cfe40dbe910d4b53b9058110292bec537b537b755fce694ffe01" id=a90318e1-4b06-4bf0-a5af-977d7d06e2d0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 14 20:09:23 functional-276405 crio[3545]: time="2025-10-14T20:09:23.204434263Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f6fbd536-ef9b-4dec-840f-1b7b53884c5a name=/runtime.v1.ImageService/PullImage
	Oct 14 20:09:34 functional-276405 crio[3545]: time="2025-10-14T20:09:34.202776106Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0fb25755-9aa7-4d54-98c4-ca821812e3a8 name=/runtime.v1.ImageService/PullImage
	Oct 14 20:09:49 functional-276405 crio[3545]: time="2025-10-14T20:09:49.203294197Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cdb09522-5c20-4103-bff8-37ff581ff481 name=/runtime.v1.ImageService/PullImage
	Oct 14 20:10:15 functional-276405 crio[3545]: time="2025-10-14T20:10:15.202143272Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=34d97940-3e33-4bb3-a516-86db87f74e97 name=/runtime.v1.ImageService/PullImage
	Oct 14 20:10:42 functional-276405 crio[3545]: time="2025-10-14T20:10:42.202552532Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=680f5d40-5577-492c-8a22-b38f4b9c8972 name=/runtime.v1.ImageService/PullImage
	Oct 14 20:11:42 functional-276405 crio[3545]: time="2025-10-14T20:11:42.203217997Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d973234c-883a-4760-9ce7-5eda7723b096 name=/runtime.v1.ImageService/PullImage
	Oct 14 20:12:10 functional-276405 crio[3545]: time="2025-10-14T20:12:10.20259911Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1db4fe59-83c3-400b-8451-9163eb17c9ed name=/runtime.v1.ImageService/PullImage
	Oct 14 20:14:24 functional-276405 crio[3545]: time="2025-10-14T20:14:24.202391689Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=bae9d65a-dabf-422f-b66b-f102fc6bcf8a name=/runtime.v1.ImageService/PullImage
	Oct 14 20:15:00 functional-276405 crio[3545]: time="2025-10-14T20:15:00.205102517Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f5c39909-7c3b-4f0e-b834-15d80f6aa432 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0c9de067b8c82       docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a   9 minutes ago       Running             myfrontend                0                   ced4129d285c9       sp-pod                                      default
	c816ddf735d2a       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0   10 minutes ago      Running             nginx                     0                   5937a94af9a39       nginx-svc                                   default
	30c38d59f922b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   98b76e89de0d5       coredns-66bc5c9577-xf9vd                    kube-system
	e243e1aa88ddc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   bd9bc754d6599       storage-provisioner                         kube-system
	f7f00dbe95a88       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   b415c2b7e50d5       kube-proxy-r27qj                            kube-system
	f062d4934989a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   b11dd44693be4       kindnet-7zrq7                               kube-system
	727c50d3ede36       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   a1f5bf779a9e3       kube-apiserver-functional-276405            kube-system
	b064c111e99b4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   9a58e1793b1df       kube-controller-manager-functional-276405   kube-system
	dff6670b9cf73       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   8b2431ddd9734       kube-scheduler-functional-276405            kube-system
	93b57fec4ae89       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   69e4009759242       etcd-functional-276405                      kube-system
	8d315b26954f1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   9a58e1793b1df       kube-controller-manager-functional-276405   kube-system
	10040b6dcd76c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   69e4009759242       etcd-functional-276405                      kube-system
	56815432fe743       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   8b2431ddd9734       kube-scheduler-functional-276405            kube-system
	8b74fe836774b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   bd9bc754d6599       storage-provisioner                         kube-system
	e2da0aed7e612       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   b415c2b7e50d5       kube-proxy-r27qj                            kube-system
	5db4d41a380b2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   98b76e89de0d5       coredns-66bc5c9577-xf9vd                    kube-system
	78f50d8ecb538       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   b11dd44693be4       kindnet-7zrq7                               kube-system
	
	
	==> coredns [30c38d59f922be4c3c6820975a4ab12d2612446d95a1bc6aec040caff3787717] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39991 - 9116 "HINFO IN 7294089014386223472.6534384119796579303. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023215939s
	
	
	==> coredns [5db4d41a380b231d4d2176771a14c6842006496d53178e3f82300e5fe2ea7d26] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55439 - 39288 "HINFO IN 6904896558082974520.3964330645682828436. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.038972852s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-276405
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-276405
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=functional-276405
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T20_06_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 20:06:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-276405
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 20:18:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 20:17:28 +0000   Tue, 14 Oct 2025 20:06:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 20:17:28 +0000   Tue, 14 Oct 2025 20:06:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 20:17:28 +0000   Tue, 14 Oct 2025 20:06:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 20:17:28 +0000   Tue, 14 Oct 2025 20:07:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-276405
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 5834e91c638b4e4f9c12b505aa5df104
	  System UUID:                b074881d-4dde-43b4-bdf6-fad33875ad98
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-f57x4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  default                     hello-node-connect-7d85dfc575-hl2sc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-66bc5c9577-xf9vd                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-276405                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-7zrq7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-276405             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-276405    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-r27qj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-276405             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-276405 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-276405 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-276405 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-276405 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-276405 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-276405 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-276405 event: Registered Node functional-276405 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-276405 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-276405 event: Registered Node functional-276405 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-276405 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-276405 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-276405 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-276405 event: Registered Node functional-276405 in Controller
	
	
	==> dmesg <==
	[Oct14 19:30] overlayfs: idmapped layers are currently not supported
	[Oct14 19:32] overlayfs: idmapped layers are currently not supported
	[Oct14 19:38] overlayfs: idmapped layers are currently not supported
	[Oct14 19:41] overlayfs: idmapped layers are currently not supported
	[ +36.095229] overlayfs: idmapped layers are currently not supported
	[Oct14 19:42] overlayfs: idmapped layers are currently not supported
	[ +18.587469] overlayfs: idmapped layers are currently not supported
	[Oct14 19:44] overlayfs: idmapped layers are currently not supported
	[ +28.158583] overlayfs: idmapped layers are currently not supported
	[Oct14 19:45] overlayfs: idmapped layers are currently not supported
	[Oct14 19:46] overlayfs: idmapped layers are currently not supported
	[Oct14 19:47] overlayfs: idmapped layers are currently not supported
	[ +31.505487] overlayfs: idmapped layers are currently not supported
	[Oct14 19:48] overlayfs: idmapped layers are currently not supported
	[ +35.381153] overlayfs: idmapped layers are currently not supported
	[ +14.062769] overlayfs: idmapped layers are currently not supported
	[Oct14 19:50] overlayfs: idmapped layers are currently not supported
	[Oct14 19:51] overlayfs: idmapped layers are currently not supported
	[Oct14 19:52] overlayfs: idmapped layers are currently not supported
	[  +0.143459] overlayfs: idmapped layers are currently not supported
	[Oct14 19:54] overlayfs: idmapped layers are currently not supported
	[Oct14 19:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct14 19:59] overlayfs: idmapped layers are currently not supported
	[Oct14 20:05] overlayfs: idmapped layers are currently not supported
	[Oct14 20:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [10040b6dcd76c63c89b284847cb296cdb0667b2d2a944bca97c8b15a9db34ff5] <==
	{"level":"warn","ts":"2025-10-14T20:07:32.000306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:07:32.024784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:07:32.060410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:07:32.100806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:07:32.120585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:07:32.142485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:07:32.197981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49868","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-14T20:07:56.738633Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-14T20:07:56.738676Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-276405","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-14T20:07:56.738778Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-14T20:07:56.738830Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-14T20:07:56.885615Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-14T20:07:56.885695Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-14T20:07:56.885769Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-14T20:07:56.885794Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-14T20:07:56.885851Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-14T20:07:56.885920Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-14T20:07:56.885966Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-14T20:07:56.886051Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-14T20:07:56.886073Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-14T20:07:56.886083Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-14T20:07:56.889744Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-14T20:07:56.889841Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-14T20:07:56.889886Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-14T20:07:56.889893Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-276405","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [93b57fec4ae899366bc9ded3e24db08fca1f65e3ab57d3fab29727cffe67e477] <==
	{"level":"warn","ts":"2025-10-14T20:08:16.472778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.489588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.507773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.548155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.554304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.569357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.589220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.612293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.637097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.642424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.677521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.698048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.714962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.730695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.751215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.769398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.785666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.805019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.875529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.881069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.920544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T20:08:16.986627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38216","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-14T20:18:15.640696Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1126}
	{"level":"info","ts":"2025-10-14T20:18:15.668724Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1126,"took":"27.682925ms","hash":2964035229,"current-db-size-bytes":3162112,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1392640,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-14T20:18:15.668778Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2964035229,"revision":1126,"compact-revision":-1}
	
	
	==> kernel <==
	 20:18:54 up  3:01,  0 user,  load average: 0.55, 0.47, 1.28
	Linux functional-276405 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [78f50d8ecb538b301eddcfd9a8de4f0fd694bc2b41dc8eb1b5f9b009c2d3ab49] <==
	I1014 20:07:28.183701       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 20:07:28.189099       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1014 20:07:28.189248       1 main.go:148] setting mtu 1500 for CNI 
	I1014 20:07:28.189261       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 20:07:28.189276       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T20:07:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 20:07:28.453393       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 20:07:28.453421       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 20:07:28.453430       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 20:07:28.453758       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1014 20:07:32.923078       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1014 20:07:32.923214       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1014 20:07:32.923293       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1014 20:07:32.923336       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1014 20:07:34.453887       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1014 20:07:34.453916       1 metrics.go:72] Registering metrics
	I1014 20:07:34.453992       1 controller.go:711] "Syncing nftables rules"
	I1014 20:07:38.433642       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:07:38.433698       1 main.go:301] handling current node
	I1014 20:07:48.433648       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:07:48.433681       1 main.go:301] handling current node
	
	
	==> kindnet [f062d4934989a9c23e43e5851f65e849de8f952cb7b3c799ae50e22c18253d05] <==
	I1014 20:16:48.885652       1 main.go:301] handling current node
	I1014 20:16:58.886612       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:16:58.886648       1 main.go:301] handling current node
	I1014 20:17:08.888630       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:17:08.888668       1 main.go:301] handling current node
	I1014 20:17:18.881816       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:17:18.881917       1 main.go:301] handling current node
	I1014 20:17:28.882039       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:17:28.882071       1 main.go:301] handling current node
	I1014 20:17:38.890217       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:17:38.890252       1 main.go:301] handling current node
	I1014 20:17:48.881638       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:17:48.881753       1 main.go:301] handling current node
	I1014 20:17:58.881853       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:17:58.881905       1 main.go:301] handling current node
	I1014 20:18:08.888448       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:18:08.888484       1 main.go:301] handling current node
	I1014 20:18:18.886221       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:18:18.886331       1 main.go:301] handling current node
	I1014 20:18:28.881886       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:18:28.882020       1 main.go:301] handling current node
	I1014 20:18:38.887951       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:18:38.887985       1 main.go:301] handling current node
	I1014 20:18:48.885727       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:18:48.885835       1 main.go:301] handling current node
	
	
	==> kube-apiserver [727c50d3ede3644375c1a29803bb80f81fdf93037382f8f054bf32ce817394a2] <==
	I1014 20:08:18.195373       1 autoregister_controller.go:144] Starting autoregister controller
	I1014 20:08:18.195405       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 20:08:18.195444       1 cache.go:39] Caches are synced for autoregister controller
	I1014 20:08:18.195047       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1014 20:08:18.212756       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1014 20:08:18.212947       1 policy_source.go:240] refreshing policies
	I1014 20:08:18.213613       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 20:08:18.235259       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 20:08:18.236108       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 20:08:18.783924       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 20:08:20.364750       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1014 20:08:20.517272       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1014 20:08:20.616381       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 20:08:20.625899       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 20:08:21.427563       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1014 20:08:21.487693       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 20:08:21.647611       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 20:08:34.934590       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.190.141"}
	I1014 20:08:41.779837       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.144.99"}
	I1014 20:08:52.456899       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.92.244"}
	E1014 20:08:59.453906       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:33894: use of closed network connection
	E1014 20:09:00.459149       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1014 20:09:07.748988       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35022: use of closed network connection
	I1014 20:09:07.946442       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.48.74"}
	I1014 20:18:18.134833       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [8d315b26954f11354d467686850d1b6b5c1045bdb0825c6b3695e7ac8aba9c97] <==
	I1014 20:07:36.518556       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1014 20:07:36.518649       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1014 20:07:36.518619       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1014 20:07:36.520832       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 20:07:36.523448       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1014 20:07:36.527683       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1014 20:07:36.529975       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1014 20:07:36.543334       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 20:07:36.544377       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1014 20:07:36.547872       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1014 20:07:36.547882       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1014 20:07:36.551197       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1014 20:07:36.553548       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1014 20:07:36.559442       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1014 20:07:36.559498       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1014 20:07:36.559642       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 20:07:36.559680       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1014 20:07:36.559710       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 20:07:36.559994       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1014 20:07:36.562311       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1014 20:07:36.562333       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1014 20:07:36.565192       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1014 20:07:36.568599       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1014 20:07:36.569726       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1014 20:07:36.573120       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [b064c111e99b4dd95afe99e94e17af1acdea4e70df9db2699deb76da72f43fe6] <==
	I1014 20:08:21.374774       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-276405"
	I1014 20:08:21.374818       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1014 20:08:21.377480       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1014 20:08:21.381346       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 20:08:21.388692       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1014 20:08:21.389233       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1014 20:08:21.389590       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1014 20:08:21.389740       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1014 20:08:21.395990       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 20:08:21.396018       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1014 20:08:21.396044       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 20:08:21.400417       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1014 20:08:21.405660       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1014 20:08:21.406363       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1014 20:08:21.406403       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1014 20:08:21.406422       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 20:08:21.406426       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1014 20:08:21.406431       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1014 20:08:21.416948       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 20:08:21.417323       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1014 20:08:21.428372       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1014 20:08:21.428452       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1014 20:08:21.428501       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1014 20:08:21.429472       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1014 20:08:21.440413       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [e2da0aed7e61278c61116dc46baa15ae194fb3e7abdee3d2feb9f9d2e8c5f092] <==
	I1014 20:07:29.937436       1 server_linux.go:53] "Using iptables proxy"
	I1014 20:07:31.684667       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1014 20:07:32.988509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-276405\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1014 20:07:34.299511       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 20:07:34.299550       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1014 20:07:34.299634       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 20:07:34.318088       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 20:07:34.318147       1 server_linux.go:132] "Using iptables Proxier"
	I1014 20:07:34.322472       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 20:07:34.322794       1 server.go:527] "Version info" version="v1.34.1"
	I1014 20:07:34.322853       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:07:34.325842       1 config.go:106] "Starting endpoint slice config controller"
	I1014 20:07:34.325912       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 20:07:34.326303       1 config.go:200] "Starting service config controller"
	I1014 20:07:34.326383       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 20:07:34.326902       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 20:07:34.328441       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 20:07:34.328219       1 config.go:309] "Starting node config controller"
	I1014 20:07:34.328819       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 20:07:34.328827       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 20:07:34.426700       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 20:07:34.426779       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1014 20:07:34.428642       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [f7f00dbe95a88141f32f423f7ce629f64dc77940dd88dead82cd28498e4e3485] <==
	I1014 20:08:18.646520       1 server_linux.go:53] "Using iptables proxy"
	I1014 20:08:18.762030       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 20:08:18.872541       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 20:08:18.872582       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1014 20:08:18.872646       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 20:08:19.276563       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 20:08:19.276619       1 server_linux.go:132] "Using iptables Proxier"
	I1014 20:08:19.325550       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 20:08:19.325868       1 server.go:527] "Version info" version="v1.34.1"
	I1014 20:08:19.325889       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:08:19.347084       1 config.go:200] "Starting service config controller"
	I1014 20:08:19.347114       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 20:08:19.347614       1 config.go:106] "Starting endpoint slice config controller"
	I1014 20:08:19.347633       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 20:08:19.347679       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 20:08:19.347689       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 20:08:19.356620       1 config.go:309] "Starting node config controller"
	I1014 20:08:19.356645       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 20:08:19.356654       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 20:08:19.447230       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 20:08:19.447747       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 20:08:19.447780       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [56815432fe74330c4ba776195a49542f4dd95522b5d96f28a0ac52a1f4843fb3] <==
	I1014 20:07:31.970833       1 serving.go:386] Generated self-signed cert in-memory
	I1014 20:07:34.078429       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 20:07:34.078544       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:07:34.086848       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 20:07:34.087024       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1014 20:07:34.087063       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1014 20:07:34.087323       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 20:07:34.088711       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:07:34.088790       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:07:34.088878       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 20:07:34.088910       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 20:07:34.187542       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1014 20:07:34.189426       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 20:07:34.189616       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:07:56.745574       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1014 20:07:56.745605       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1014 20:07:56.745628       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1014 20:07:56.745653       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:07:56.745676       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 20:07:56.745693       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1014 20:07:56.745981       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1014 20:07:56.746016       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [dff6670b9cf7375b94454578df698847008ec29c35f893dac5ff64859741e388] <==
	I1014 20:08:19.761280       1 serving.go:386] Generated self-signed cert in-memory
	I1014 20:08:21.794815       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 20:08:21.794858       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:08:21.800758       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 20:08:21.800844       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1014 20:08:21.800875       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1014 20:08:21.800909       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 20:08:21.802929       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:08:21.802956       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:08:21.802975       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 20:08:21.802987       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 20:08:21.901007       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1014 20:08:21.903850       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 20:08:21.903863       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 14 20:16:16 functional-276405 kubelet[3858]: E1014 20:16:16.202299    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f57x4" podUID="fbba014f-8d90-4f43-a203-8e883b297f66"
	Oct 14 20:16:19 functional-276405 kubelet[3858]: E1014 20:16:19.202388    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hl2sc" podUID="a1eb48f4-3b32-46db-bae4-dfbd2182ab3b"
	Oct 14 20:16:31 functional-276405 kubelet[3858]: E1014 20:16:31.202380    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f57x4" podUID="fbba014f-8d90-4f43-a203-8e883b297f66"
	Oct 14 20:16:34 functional-276405 kubelet[3858]: E1014 20:16:34.202268    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hl2sc" podUID="a1eb48f4-3b32-46db-bae4-dfbd2182ab3b"
	Oct 14 20:16:43 functional-276405 kubelet[3858]: E1014 20:16:43.203340    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f57x4" podUID="fbba014f-8d90-4f43-a203-8e883b297f66"
	Oct 14 20:16:45 functional-276405 kubelet[3858]: E1014 20:16:45.202094    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hl2sc" podUID="a1eb48f4-3b32-46db-bae4-dfbd2182ab3b"
	Oct 14 20:16:54 functional-276405 kubelet[3858]: E1014 20:16:54.202073    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f57x4" podUID="fbba014f-8d90-4f43-a203-8e883b297f66"
	Oct 14 20:16:59 functional-276405 kubelet[3858]: E1014 20:16:59.202618    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hl2sc" podUID="a1eb48f4-3b32-46db-bae4-dfbd2182ab3b"
	Oct 14 20:17:08 functional-276405 kubelet[3858]: E1014 20:17:08.201712    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f57x4" podUID="fbba014f-8d90-4f43-a203-8e883b297f66"
	Oct 14 20:17:14 functional-276405 kubelet[3858]: E1014 20:17:14.202299    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hl2sc" podUID="a1eb48f4-3b32-46db-bae4-dfbd2182ab3b"
	Oct 14 20:17:19 functional-276405 kubelet[3858]: E1014 20:17:19.202062    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f57x4" podUID="fbba014f-8d90-4f43-a203-8e883b297f66"
	Oct 14 20:17:27 functional-276405 kubelet[3858]: E1014 20:17:27.201675    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hl2sc" podUID="a1eb48f4-3b32-46db-bae4-dfbd2182ab3b"
	Oct 14 20:17:32 functional-276405 kubelet[3858]: E1014 20:17:32.201920    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f57x4" podUID="fbba014f-8d90-4f43-a203-8e883b297f66"
	Oct 14 20:17:39 functional-276405 kubelet[3858]: E1014 20:17:39.202173    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hl2sc" podUID="a1eb48f4-3b32-46db-bae4-dfbd2182ab3b"
	Oct 14 20:17:46 functional-276405 kubelet[3858]: E1014 20:17:46.202484    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f57x4" podUID="fbba014f-8d90-4f43-a203-8e883b297f66"
	Oct 14 20:17:54 functional-276405 kubelet[3858]: E1014 20:17:54.201943    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hl2sc" podUID="a1eb48f4-3b32-46db-bae4-dfbd2182ab3b"
	Oct 14 20:18:00 functional-276405 kubelet[3858]: E1014 20:18:00.215123    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f57x4" podUID="fbba014f-8d90-4f43-a203-8e883b297f66"
	Oct 14 20:18:08 functional-276405 kubelet[3858]: E1014 20:18:08.202080    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hl2sc" podUID="a1eb48f4-3b32-46db-bae4-dfbd2182ab3b"
	Oct 14 20:18:14 functional-276405 kubelet[3858]: E1014 20:18:14.202184    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f57x4" podUID="fbba014f-8d90-4f43-a203-8e883b297f66"
	Oct 14 20:18:20 functional-276405 kubelet[3858]: E1014 20:18:20.201954    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hl2sc" podUID="a1eb48f4-3b32-46db-bae4-dfbd2182ab3b"
	Oct 14 20:18:26 functional-276405 kubelet[3858]: E1014 20:18:26.201509    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f57x4" podUID="fbba014f-8d90-4f43-a203-8e883b297f66"
	Oct 14 20:18:31 functional-276405 kubelet[3858]: E1014 20:18:31.204141    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hl2sc" podUID="a1eb48f4-3b32-46db-bae4-dfbd2182ab3b"
	Oct 14 20:18:37 functional-276405 kubelet[3858]: E1014 20:18:37.203616    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f57x4" podUID="fbba014f-8d90-4f43-a203-8e883b297f66"
	Oct 14 20:18:42 functional-276405 kubelet[3858]: E1014 20:18:42.201581    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hl2sc" podUID="a1eb48f4-3b32-46db-bae4-dfbd2182ab3b"
	Oct 14 20:18:51 functional-276405 kubelet[3858]: E1014 20:18:51.202207    3858 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f57x4" podUID="fbba014f-8d90-4f43-a203-8e883b297f66"
	
	
	==> storage-provisioner [8b74fe836774bc9729df28f59e52a5dfaa3fdfd9d302ea018712346832b0eea1] <==
	I1014 20:07:29.227594       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 20:07:33.169358       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 20:07:33.169500       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1014 20:07:33.195837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:07:36.651021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:07:40.910970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:07:44.510106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:07:47.563493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:07:50.586384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:07:50.595993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 20:07:50.596248       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 20:07:50.596813       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9ec60b02-f0f1-4cfe-943c-40a3be315190", APIVersion:"v1", ResourceVersion:"558", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-276405_06576e37-8714-4b9d-8d13-c0720c29c28f became leader
	I1014 20:07:50.599830       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-276405_06576e37-8714-4b9d-8d13-c0720c29c28f!
	W1014 20:07:50.609777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:07:50.620755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 20:07:50.700258       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-276405_06576e37-8714-4b9d-8d13-c0720c29c28f!
	W1014 20:07:52.623649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:07:52.628924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:07:54.632560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:07:54.641696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:07:56.645084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:07:56.654301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e243e1aa88ddcebf5d213c9853dadd83f9a770ce94bfd9b3ffa697429a04d9ed] <==
	W1014 20:18:30.903441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:32.907501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:32.911799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:34.914515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:34.920513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:36.923004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:36.928138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:38.931567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:38.935928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:40.938779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:40.943228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:42.946692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:42.954554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:44.958210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:44.962707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:46.965212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:46.969478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:48.972400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:48.978852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:50.981169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:50.985692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:53.006253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:53.017541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:55.021835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:18:55.029637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-276405 -n functional-276405
helpers_test.go:269: (dbg) Run:  kubectl --context functional-276405 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-f57x4 hello-node-connect-7d85dfc575-hl2sc
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-276405 describe pod hello-node-75c85bcc94-f57x4 hello-node-connect-7d85dfc575-hl2sc
helpers_test.go:290: (dbg) kubectl --context functional-276405 describe pod hello-node-75c85bcc94-f57x4 hello-node-connect-7d85dfc575-hl2sc:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-f57x4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-276405/192.168.49.2
	Start Time:       Tue, 14 Oct 2025 20:09:07 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wtz9k (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wtz9k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m47s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-f57x4 to functional-276405
	  Normal   Pulling    6m45s (x5 over 9m47s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m45s (x5 over 9m47s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m45s (x5 over 9m47s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m42s (x20 over 9m47s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m31s (x21 over 9m47s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-hl2sc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-276405/192.168.49.2
	Start Time:       Tue, 14 Oct 2025 20:08:52 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rqbqm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rqbqm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hl2sc to functional-276405
	  Normal   Pulling    7m13s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m13s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m13s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m58s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m45s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.50s)
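Every image pull in this failure hits the same CRI-O error: short-name resolution is in enforcing mode, so the unqualified name kicbase/echo-server cannot be resolved to a single registry ("returns ambiguous list"). A minimal reproduction-side sketch of working around it is shown below; it is not part of the test run, the deployment name hello-node-fq is a hypothetical stand-in, and docker.io is assumed to be the registry hosting the image. An alternative, also an assumption, would be adding a short-name alias for "kicbase/echo-server" under /etc/containers/registries.conf.d/ inside the node.

    # Assumption: deploy with a fully qualified reference so CRI-O never does short-name resolution.
    kubectl --context functional-276405 create deployment hello-node-fq \
      --image=docker.io/kicbase/echo-server:latest
    # Confirm the pod actually starts once the image reference is unambiguous.
    kubectl --context functional-276405 rollout status deployment/hello-node-fq --timeout=2m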

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-276405 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-276405 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-f57x4" [fbba014f-8d90-4f43-a203-8e883b297f66] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1014 20:09:28.264742  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:11:44.392728  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:12:12.106107  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:16:44.392509  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-276405 -n functional-276405
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-14 20:19:08.410517976 +0000 UTC m=+1305.828671764
functional_test.go:1460: (dbg) Run:  kubectl --context functional-276405 describe po hello-node-75c85bcc94-f57x4 -n default
functional_test.go:1460: (dbg) kubectl --context functional-276405 describe po hello-node-75c85bcc94-f57x4 -n default:
Name:             hello-node-75c85bcc94-f57x4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-276405/192.168.49.2
Start Time:       Tue, 14 Oct 2025 20:09:07 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wtz9k (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-wtz9k:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-f57x4 to functional-276405
  Normal   Pulling    6m58s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m58s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m58s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m55s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m44s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-276405 logs hello-node-75c85bcc94-f57x4 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-276405 logs hello-node-75c85bcc94-f57x4 -n default: exit status 1 (124.128981ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-f57x4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-276405 logs hello-node-75c85bcc94-f57x4 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.89s)
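The deployment and service were created successfully; what fails is the 10-minute readiness wait at functional_test.go:1460, because the echo-server container never pulls. A rough manual equivalent of that wait, useful when reproducing the failure outside the test harness (the 10m timeout simply mirrors the test's budget):

    # Wait for the hello-node pod to become Ready, as the test does.
    kubectl --context functional-276405 wait --for=condition=Ready pod -l app=hello-node --timeout=10m
    # On timeout, inspect why the container never started (expect the ImagePullBackOff events above).
    kubectl --context functional-276405 describe pod -l app=hello-node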

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-276405 service --namespace=default --https --url hello-node: exit status 115 (476.800242ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31446
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-276405 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)
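SVC_UNREACHABLE here is a downstream effect of the hello-node pod never becoming Ready: the NodePort was allocated (31446), but the service has no endpoints to route to. A quick way to confirm that from the same context, shown as a sketch rather than part of the test:

    # An empty EndpointSlice endpoint list means no Ready pod is backing the service.
    kubectl --context functional-276405 get endpointslices -l kubernetes.io/service-name=hello-node
    kubectl --context functional-276405 get pods -l app=hello-node -o wide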

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-276405 service hello-node --url --format={{.IP}}: exit status 115 (541.28104ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-276405 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-276405 service hello-node --url: exit status 115 (534.913398ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31446
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-276405 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31446
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 image load --daemon kicbase/echo-server:functional-276405 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-276405 image load --daemon kicbase/echo-server:functional-276405 --alsologtostderr: (2.03505527s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-276405" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.34s)
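The assertion at functional_test.go:461 only checks `image ls` output after `image load --daemon` returns. For a manual reproduction against the same profile, one could re-run the load and then inspect both minikube's image list and CRI-O's view directly; this sketch assumes the host docker daemon still has the kicbase/echo-server:functional-276405 tag:

    out/minikube-linux-arm64 -p functional-276405 image load --daemon kicbase/echo-server:functional-276405
    # The tag should appear in both listings if the load actually reached the crio image store.
    out/minikube-linux-arm64 -p functional-276405 image ls
    out/minikube-linux-arm64 -p functional-276405 ssh -- sudo crictl images | grep echo-server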

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 image load --daemon kicbase/echo-server:functional-276405 --alsologtostderr
2025/10/14 20:19:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-276405 image load --daemon kicbase/echo-server:functional-276405 --alsologtostderr: (1.10288266s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-276405" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-276405
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 image load --daemon kicbase/echo-server:functional-276405 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-276405" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)
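The full pull, tag, load, verify sequence this test exercises, written out as a hedged shell sketch (image names and profile are from the log; the --format table flag is only for readability and is not what the test uses):

	docker pull kicbase/echo-server:latest
	docker tag kicbase/echo-server:latest kicbase/echo-server:functional-276405
	out/minikube-linux-arm64 -p functional-276405 image load --daemon kicbase/echo-server:functional-276405
	out/minikube-linux-arm64 -p functional-276405 image ls --format table | grep echo-server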

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 image save kicbase/echo-server:functional-276405 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)
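A hedged way to confirm whether `image save` wrote anything at all (the tag comes from the log; /tmp is used here only to avoid the Jenkins workspace path):

	out/minikube-linux-arm64 -p functional-276405 image save kicbase/echo-server:functional-276405 /tmp/echo-server-save.tar --alsologtostderr
	ls -l /tmp/echo-server-save.tar            # the shell equivalent of the assertion at functional_test.go:401
	tar -tf /tmp/echo-server-save.tar | head   # a valid image tar should list a manifest and layer blobs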

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1014 20:19:23.156837  575621 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:19:23.158143  575621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:19:23.158194  575621 out.go:374] Setting ErrFile to fd 2...
	I1014 20:19:23.158213  575621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:19:23.158514  575621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:19:23.159229  575621 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:19:23.159417  575621 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:19:23.159936  575621 cli_runner.go:164] Run: docker container inspect functional-276405 --format={{.State.Status}}
	I1014 20:19:23.183724  575621 ssh_runner.go:195] Run: systemctl --version
	I1014 20:19:23.183905  575621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-276405
	I1014 20:19:23.212488  575621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33517 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/functional-276405/id_rsa Username:docker}
	I1014 20:19:23.319111  575621 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1014 20:19:23.319170  575621 cache_images.go:254] Failed to load cached images for "functional-276405": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1014 20:19:23.319194  575621 cache_images.go:266] failed pushing to: functional-276405

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)
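The stderr above shows the root cause: the tarball from ImageSaveToFile was never written, so this load has nothing to read. A hedged save-then-load round trip that would isolate the load path (same profile and tag; the /tmp path is illustrative):

	out/minikube-linux-arm64 -p functional-276405 image save kicbase/echo-server:functional-276405 /tmp/echo-server.tar
	out/minikube-linux-arm64 -p functional-276405 image load /tmp/echo-server.tar --alsologtostderr
	out/minikube-linux-arm64 -p functional-276405 image ls | grep echo-server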

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-276405
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 image save --daemon kicbase/echo-server:functional-276405 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-276405
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-276405: exit status 1 (17.382125ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-276405

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-276405

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)
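`image save --daemon` is expected to push the image back into the host Docker daemon, after which the test inspects it under the localhost/ prefix. A hedged manual check (names are from the log; the unprefixed listing is only there to see whether the image landed under a different name):

	out/minikube-linux-arm64 -p functional-276405 image save --daemon kicbase/echo-server:functional-276405 --alsologtostderr
	docker image ls | grep echo-server                                           # did any echo-server tag land in the daemon?
	docker image inspect localhost/kicbase/echo-server:functional-276405         # the exact lookup from functional_test.go:447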

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (532.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-378189 stop --alsologtostderr -v 5: (26.97032245s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 start --wait true --alsologtostderr -v 5
E1014 20:26:25.164476  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:26:44.392687  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:28:41.304645  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:29:09.006430  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:31:44.392748  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:33:41.304693  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-378189 start --wait true --alsologtostderr -v 5: exit status 80 (8m22.056879248s)
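The restart ran for 8m22s and exited with status 80 while verifying the m02 and m03 control-plane nodes (see the stdout below). A hedged set of follow-up checks against the restarted cluster; the profile and node names are from the log, the kubectl context assumes minikube's usual naming, and the --node flag is the standard way to pull a secondary node's logs:

	out/minikube-linux-arm64 -p ha-378189 status
	out/minikube-linux-arm64 -p ha-378189 node list
	out/minikube-linux-arm64 -p ha-378189 logs --node ha-378189-m03 --file=ha-m03-logs.txt
	kubectl --context ha-378189 get nodes -o wide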

                                                
                                                
-- stdout --
	* [ha-378189] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-378189" primary control-plane node in "ha-378189" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Enabled addons: 
	
	* Starting "ha-378189-m02" control-plane node in "ha-378189" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-378189-m03" control-plane node in "ha-378189" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:25:43.361106  594320 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:25:43.361469  594320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:25:43.361502  594320 out.go:374] Setting ErrFile to fd 2...
	I1014 20:25:43.361522  594320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:25:43.361834  594320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:25:43.362250  594320 out.go:368] Setting JSON to false
	I1014 20:25:43.363143  594320 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11294,"bootTime":1760462250,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 20:25:43.363234  594320 start.go:141] virtualization:  
	I1014 20:25:43.366372  594320 out.go:179] * [ha-378189] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 20:25:43.370273  594320 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:25:43.370502  594320 notify.go:220] Checking for updates...
	I1014 20:25:43.376212  594320 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:25:43.379153  594320 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 20:25:43.382007  594320 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 20:25:43.384781  594320 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 20:25:43.387595  594320 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:25:43.391018  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:43.391127  594320 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:25:43.417968  594320 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 20:25:43.418090  594320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:25:43.479361  594320 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-14 20:25:43.470111217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 20:25:43.479472  594320 docker.go:318] overlay module found
	I1014 20:25:43.482541  594320 out.go:179] * Using the docker driver based on existing profile
	I1014 20:25:43.485265  594320 start.go:305] selected driver: docker
	I1014 20:25:43.485282  594320 start.go:925] validating driver "docker" against &{Name:ha-378189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:25:43.485422  594320 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:25:43.485520  594320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:25:43.540613  594320 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-14 20:25:43.531183177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 20:25:43.541069  594320 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:25:43.541095  594320 cni.go:84] Creating CNI manager for ""
	I1014 20:25:43.541149  594320 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1014 20:25:43.541196  594320 start.go:349] cluster config:
	{Name:ha-378189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:25:43.546335  594320 out.go:179] * Starting "ha-378189" primary control-plane node in "ha-378189" cluster
	I1014 20:25:43.549190  594320 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:25:43.552105  594320 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:25:43.554956  594320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:25:43.555008  594320 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 20:25:43.555023  594320 cache.go:58] Caching tarball of preloaded images
	I1014 20:25:43.555058  594320 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:25:43.555118  594320 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 20:25:43.555128  594320 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:25:43.555280  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:43.574403  594320 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:25:43.574427  594320 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:25:43.574448  594320 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:25:43.574472  594320 start.go:360] acquireMachinesLock for ha-378189: {Name:mk72998cded3d94d4d24b35423c9bb42df68dec0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:25:43.574536  594320 start.go:364] duration metric: took 38.794µs to acquireMachinesLock for "ha-378189"
	I1014 20:25:43.574566  594320 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:25:43.574582  594320 fix.go:54] fixHost starting: 
	I1014 20:25:43.574842  594320 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:25:43.592022  594320 fix.go:112] recreateIfNeeded on ha-378189: state=Stopped err=<nil>
	W1014 20:25:43.592054  594320 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:25:43.595275  594320 out.go:252] * Restarting existing docker container for "ha-378189" ...
	I1014 20:25:43.595377  594320 cli_runner.go:164] Run: docker start ha-378189
	I1014 20:25:43.835199  594320 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:25:43.862801  594320 kic.go:430] container "ha-378189" state is running.
	I1014 20:25:43.863545  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189
	I1014 20:25:43.891763  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:43.891990  594320 machine.go:93] provisionDockerMachine start ...
	I1014 20:25:43.892050  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:43.915656  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:43.915978  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33547 <nil> <nil>}
	I1014 20:25:43.915987  594320 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:25:43.916996  594320 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1014 20:25:47.063750  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189
	
	I1014 20:25:47.063771  594320 ubuntu.go:182] provisioning hostname "ha-378189"
	I1014 20:25:47.063837  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:47.081648  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:47.081949  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33547 <nil> <nil>}
	I1014 20:25:47.081959  594320 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-378189 && echo "ha-378189" | sudo tee /etc/hostname
	I1014 20:25:47.237261  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189
	
	I1014 20:25:47.237361  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:47.256023  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:47.256364  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33547 <nil> <nil>}
	I1014 20:25:47.256388  594320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-378189' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-378189/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-378189' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:25:47.404333  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:25:47.404362  594320 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 20:25:47.404398  594320 ubuntu.go:190] setting up certificates
	I1014 20:25:47.404407  594320 provision.go:84] configureAuth start
	I1014 20:25:47.404474  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189
	I1014 20:25:47.421674  594320 provision.go:143] copyHostCerts
	I1014 20:25:47.421717  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:25:47.421746  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 20:25:47.421765  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:25:47.421841  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 20:25:47.421929  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:25:47.421951  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 20:25:47.421956  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:25:47.421989  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 20:25:47.422033  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:25:47.422054  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 20:25:47.422061  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:25:47.422086  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 20:25:47.422140  594320 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.ha-378189 san=[127.0.0.1 192.168.49.2 ha-378189 localhost minikube]
	I1014 20:25:47.717715  594320 provision.go:177] copyRemoteCerts
	I1014 20:25:47.717791  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:25:47.717832  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:47.737857  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:47.839808  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:25:47.839867  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:25:47.856581  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:25:47.856687  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:25:47.873767  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:25:47.873865  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:25:47.890677  594320 provision.go:87] duration metric: took 486.243045ms to configureAuth
	I1014 20:25:47.890712  594320 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:25:47.890978  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:47.891083  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:47.907847  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:47.908141  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33547 <nil> <nil>}
	I1014 20:25:47.908155  594320 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:25:48.237643  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:25:48.237669  594320 machine.go:96] duration metric: took 4.345669995s to provisionDockerMachine
	I1014 20:25:48.237681  594320 start.go:293] postStartSetup for "ha-378189" (driver="docker")
	I1014 20:25:48.237692  594320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:25:48.237753  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:25:48.237810  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:48.258809  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:48.360095  594320 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:25:48.363412  594320 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:25:48.363442  594320 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:25:48.363453  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 20:25:48.363504  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 20:25:48.363585  594320 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 20:25:48.363596  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /etc/ssl/certs/5469102.pem
	I1014 20:25:48.363699  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:25:48.371847  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:25:48.388783  594320 start.go:296] duration metric: took 151.087012ms for postStartSetup
	I1014 20:25:48.388858  594320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:25:48.388903  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:48.405740  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:48.505335  594320 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:25:48.509958  594320 fix.go:56] duration metric: took 4.935376593s for fixHost
	I1014 20:25:48.509986  594320 start.go:83] releasing machines lock for "ha-378189", held for 4.93543709s
	I1014 20:25:48.510055  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189
	I1014 20:25:48.526206  594320 ssh_runner.go:195] Run: cat /version.json
	I1014 20:25:48.526270  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:48.526531  594320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:25:48.526586  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:48.548290  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:48.549796  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:48.648126  594320 ssh_runner.go:195] Run: systemctl --version
	I1014 20:25:48.747253  594320 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:25:48.786090  594320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:25:48.790656  594320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:25:48.790738  594320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:25:48.798572  594320 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:25:48.798598  594320 start.go:495] detecting cgroup driver to use...
	I1014 20:25:48.798629  594320 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 20:25:48.798676  594320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:25:48.814126  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:25:48.827342  594320 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:25:48.827404  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:25:48.843057  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:25:48.856133  594320 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:25:48.960636  594320 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:25:49.074665  594320 docker.go:234] disabling docker service ...
	I1014 20:25:49.074816  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:25:49.089288  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:25:49.101798  594320 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:25:49.217149  594320 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:25:49.331613  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:25:49.345157  594320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:25:49.358849  594320 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:25:49.358915  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.367341  594320 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:25:49.367402  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.377099  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.390988  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.399943  594320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:25:49.407841  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.416280  594320 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.424463  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.433237  594320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:25:49.440777  594320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:25:49.447969  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:25:49.570863  594320 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:25:49.701384  594320 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:25:49.701490  594320 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:25:49.705417  594320 start.go:563] Will wait 60s for crictl version
	I1014 20:25:49.705523  594320 ssh_runner.go:195] Run: which crictl
	I1014 20:25:49.708808  594320 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:25:49.731837  594320 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:25:49.731976  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:25:49.758582  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:25:49.794917  594320 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:25:49.797793  594320 cli_runner.go:164] Run: docker network inspect ha-378189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:25:49.814631  594320 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:25:49.818852  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:25:49.829148  594320 kubeadm.go:883] updating cluster {Name:ha-378189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:25:49.829306  594320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:25:49.829365  594320 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:25:49.862281  594320 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:25:49.862307  594320 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:25:49.862361  594320 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:25:49.887278  594320 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:25:49.887305  594320 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:25:49.887314  594320 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:25:49.887406  594320 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-378189 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:25:49.887491  594320 ssh_runner.go:195] Run: crio config
	I1014 20:25:49.950026  594320 cni.go:84] Creating CNI manager for ""
	I1014 20:25:49.950049  594320 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1014 20:25:49.950075  594320 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:25:49.950098  594320 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-378189 NodeName:ha-378189 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:25:49.950235  594320 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-378189"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 20:25:49.950258  594320 kube-vip.go:115] generating kube-vip config ...
	I1014 20:25:49.950319  594320 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:25:49.962120  594320 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:25:49.962220  594320 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1014 20:25:49.962284  594320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:25:49.970701  594320 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:25:49.970828  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 20:25:49.977995  594320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:25:49.990750  594320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:25:50.005887  594320 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1014 20:25:50.021284  594320 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1014 20:25:50.035924  594320 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:25:50.040106  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
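The /bin/bash -c pipeline above rewrites /etc/hosts so control-plane.minikube.internal, the controlPlaneEndpoint from the kubeadm config earlier in this log, resolves to the HA VIP 192.168.49.254 instead of a single node IP. A hedged check that the entry landed (invoking it through minikube ssh is this sketch's assumption):

    # Hedged sketch: confirm the control-plane endpoint resolves to the VIP inside the node.
    minikube -p ha-378189 ssh -- getent hosts control-plane.minikube.internal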
	I1014 20:25:50.050560  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:25:50.176106  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:25:50.192829  594320 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189 for IP: 192.168.49.2
	I1014 20:25:50.192902  594320 certs.go:195] generating shared ca certs ...
	I1014 20:25:50.192934  594320 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:50.193147  594320 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 20:25:50.193245  594320 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 20:25:50.193272  594320 certs.go:257] generating profile certs ...
	I1014 20:25:50.193429  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key
	I1014 20:25:50.193492  594320 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.097f6be1
	I1014 20:25:50.193549  594320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt.097f6be1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1014 20:25:51.264129  594320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt.097f6be1 ...
	I1014 20:25:51.264211  594320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt.097f6be1: {Name:mk888fb5869312f9302bb4d116c9798036158a50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:51.264486  594320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.097f6be1 ...
	I1014 20:25:51.264521  594320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.097f6be1: {Name:mk37f1a7286bb5552e328ef092b8b679d52c9b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:51.264668  594320 certs.go:382] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt.097f6be1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt
	I1014 20:25:51.264858  594320 certs.go:386] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.097f6be1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key
	I1014 20:25:51.265044  594320 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key
	I1014 20:25:51.265080  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:25:51.265114  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:25:51.265162  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:25:51.265197  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:25:51.265228  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:25:51.265275  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:25:51.265318  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:25:51.265346  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:25:51.265429  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 20:25:51.265484  594320 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 20:25:51.265514  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 20:25:51.265568  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:25:51.265617  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:25:51.265670  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 20:25:51.265740  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:25:51.265797  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /usr/share/ca-certificates/5469102.pem
	I1014 20:25:51.266224  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:25:51.266279  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem -> /usr/share/ca-certificates/546910.pem
	I1014 20:25:51.266869  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:25:51.293705  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:25:51.326310  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:25:51.350664  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 20:25:51.384349  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 20:25:51.428733  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:25:51.452875  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:25:51.478413  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:25:51.512863  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 20:25:51.549423  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:25:51.576302  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 20:25:51.612732  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:25:51.637495  594320 ssh_runner.go:195] Run: openssl version
	I1014 20:25:51.649332  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 20:25:51.668042  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 20:25:51.674160  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 20:25:51.674241  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 20:25:51.728425  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:25:51.739014  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:25:51.750524  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:25:51.757453  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:25:51.757551  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:25:51.825539  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:25:51.837092  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 20:25:51.848450  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 20:25:51.853823  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 20:25:51.853915  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 20:25:51.903510  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
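The openssl x509 -hash calls above print each CA's subject-name hash, and the ln -fs commands create the matching <hash>.0 links (3ec20f2e.0, b5213941.0, 51391683.0) that OpenSSL's certificate-directory lookup expects, which is what makes the minikube and test CAs trusted system-wide inside the node. A sketch of the same pattern for a single CA file, assuming it is run as root inside the node:

    # Hedged sketch: recreate the hash-named trust link for one CA file.
    # OpenSSL resolves CAs in /etc/ssl/certs by "<subject-hash>.0" symlink names.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"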
	I1014 20:25:51.914011  594320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:25:51.919570  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:25:51.968390  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:25:52.020296  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:25:52.082590  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:25:52.139873  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:25:52.202457  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
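Each of the -checkend 86400 runs above exits non-zero if the certificate in question would expire within the next 24 hours, which is how minikube decides whether a control-plane certificate must be regenerated during the restart. The same check, spelled out:

    # Hedged sketch: the 24-hour expiry test applied to one of the certs listed above.
    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "still valid for at least 24h" \
      || echo "expires within 24h, would need regeneration"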
	I1014 20:25:52.250943  594320 kubeadm.go:400] StartCluster: {Name:ha-378189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:25:52.251111  594320 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:25:52.251193  594320 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:25:52.305687  594320 cri.go:89] found id: "a140acf9afcdd0e524e513679b658bc7b2dfb9cfc9ef073ace6322510c7f8b59"
	I1014 20:25:52.305737  594320 cri.go:89] found id: "29d51a80b14a0f25f45cfca782f7ec24806bb067ccfbf71ca7754867bd5b0dab"
	I1014 20:25:52.305743  594320 cri.go:89] found id: "6eb0610f607e1c2adb8b4aa2a97c40900841b7ac3d7c573af4ffb1649fa6481b"
	I1014 20:25:52.305748  594320 cri.go:89] found id: "1df4877419cf704774f6c1342d72a03a045a372f39ae449678e06f3d08045f42"
	I1014 20:25:52.305751  594320 cri.go:89] found id: "e43d3be1d91f3e04acb8a0a4ca52d2c72f059654b12a78440dcab6115d7100aa"
	I1014 20:25:52.305756  594320 cri.go:89] found id: ""
	I1014 20:25:52.305830  594320 ssh_runner.go:195] Run: sudo runc list -f json
	W1014 20:25:52.324825  594320 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:25:52Z" level=error msg="open /run/runc: no such file or directory"
	I1014 20:25:52.324939  594320 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:25:52.340043  594320 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 20:25:52.340101  594320 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 20:25:52.340184  594320 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 20:25:52.356489  594320 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:25:52.357142  594320 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-378189" does not appear in /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 20:25:52.357314  594320 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-545047/kubeconfig needs updating (will repair): [kubeconfig missing "ha-378189" cluster setting kubeconfig missing "ha-378189" context setting]
	I1014 20:25:52.357700  594320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:52.358694  594320 kapi.go:59] client config for ha-378189: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key", CAFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:25:52.359335  594320 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 20:25:52.359405  594320 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 20:25:52.359415  594320 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 20:25:52.359420  594320 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 20:25:52.359425  594320 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 20:25:52.359381  594320 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1014 20:25:52.359974  594320 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 20:25:52.376163  594320 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1014 20:25:52.376206  594320 kubeadm.go:601] duration metric: took 36.093342ms to restartPrimaryControlPlane
	I1014 20:25:52.376215  594320 kubeadm.go:402] duration metric: took 125.292882ms to StartCluster
	I1014 20:25:52.376231  594320 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:52.376308  594320 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 20:25:52.377241  594320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:52.377523  594320 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:25:52.377556  594320 start.go:241] waiting for startup goroutines ...
	I1014 20:25:52.377564  594320 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:25:52.378287  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:52.384056  594320 out.go:179] * Enabled addons: 
	I1014 20:25:52.386972  594320 addons.go:514] duration metric: took 9.377075ms for enable addons: enabled=[]
	I1014 20:25:52.387045  594320 start.go:246] waiting for cluster config update ...
	I1014 20:25:52.387060  594320 start.go:255] writing updated cluster config ...
	I1014 20:25:52.390584  594320 out.go:203] 
	I1014 20:25:52.393809  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:52.393941  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:52.397423  594320 out.go:179] * Starting "ha-378189-m02" control-plane node in "ha-378189" cluster
	I1014 20:25:52.400301  594320 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:25:52.403231  594320 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:25:52.406009  594320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:25:52.406048  594320 cache.go:58] Caching tarball of preloaded images
	I1014 20:25:52.406156  594320 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 20:25:52.406172  594320 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:25:52.406316  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:52.406581  594320 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:25:52.433409  594320 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:25:52.433441  594320 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:25:52.433455  594320 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:25:52.433497  594320 start.go:360] acquireMachinesLock for ha-378189-m02: {Name:mk92416bb80d0c49acbc35d0990b125674d2e01f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:25:52.433567  594320 start.go:364] duration metric: took 39.312µs to acquireMachinesLock for "ha-378189-m02"
	I1014 20:25:52.433592  594320 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:25:52.433606  594320 fix.go:54] fixHost starting: m02
	I1014 20:25:52.433897  594320 cli_runner.go:164] Run: docker container inspect ha-378189-m02 --format={{.State.Status}}
	I1014 20:25:52.466174  594320 fix.go:112] recreateIfNeeded on ha-378189-m02: state=Stopped err=<nil>
	W1014 20:25:52.466206  594320 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:25:52.469507  594320 out.go:252] * Restarting existing docker container for "ha-378189-m02" ...
	I1014 20:25:52.469612  594320 cli_runner.go:164] Run: docker start ha-378189-m02
	I1014 20:25:52.947382  594320 cli_runner.go:164] Run: docker container inspect ha-378189-m02 --format={{.State.Status}}
	I1014 20:25:52.976894  594320 kic.go:430] container "ha-378189-m02" state is running.
	I1014 20:25:52.977343  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m02
	I1014 20:25:53.015718  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:53.015986  594320 machine.go:93] provisionDockerMachine start ...
	I1014 20:25:53.016064  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:53.055604  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:53.055958  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1014 20:25:53.055970  594320 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:25:53.056832  594320 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56096->127.0.0.1:33552: read: connection reset by peer
	I1014 20:25:56.232015  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189-m02
	
	I1014 20:25:56.232096  594320 ubuntu.go:182] provisioning hostname "ha-378189-m02"
	I1014 20:25:56.232187  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:56.268557  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:56.268869  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1014 20:25:56.268879  594320 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-378189-m02 && echo "ha-378189-m02" | sudo tee /etc/hostname
	I1014 20:25:56.465757  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189-m02
	
	I1014 20:25:56.465917  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:56.491913  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:56.492225  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1014 20:25:56.492241  594320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-378189-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-378189-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-378189-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:25:56.672985  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:25:56.673014  594320 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 20:25:56.673042  594320 ubuntu.go:190] setting up certificates
	I1014 20:25:56.673051  594320 provision.go:84] configureAuth start
	I1014 20:25:56.673146  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m02
	I1014 20:25:56.705878  594320 provision.go:143] copyHostCerts
	I1014 20:25:56.705922  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:25:56.705956  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 20:25:56.705963  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:25:56.706040  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 20:25:56.706163  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:25:56.706183  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 20:25:56.706187  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:25:56.706214  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 20:25:56.706263  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:25:56.706278  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 20:25:56.706282  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:25:56.706307  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 20:25:56.706362  594320 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.ha-378189-m02 san=[127.0.0.1 192.168.49.3 ha-378189-m02 localhost minikube]
	I1014 20:25:57.297271  594320 provision.go:177] copyRemoteCerts
	I1014 20:25:57.297355  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:25:57.297408  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:57.315464  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:57.428975  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:25:57.429037  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 20:25:57.452816  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:25:57.452885  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:25:57.471294  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:25:57.471361  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:25:57.489442  594320 provision.go:87] duration metric: took 816.37455ms to configureAuth
	I1014 20:25:57.489480  594320 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:25:57.489743  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:57.489867  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:57.514270  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:57.514577  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1014 20:25:57.514591  594320 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:25:57.886060  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
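The SSH command above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts crio, the intent being that the service CIDR 10.96.0.0/12 is treated as an insecure registry range. Whether the option actually reaches the crio command line depends on the kicbase unit sourcing that file, which this log does not show; a hedged check, also assuming -n m02 selects this node:

    # Hedged sketch: look at the option file and the running crio arguments on m02.
    minikube -p ha-378189 ssh -n m02 -- 'cat /etc/sysconfig/crio.minikube; ps -o args= -C crio | head -n1'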
	
	I1014 20:25:57.886080  594320 machine.go:96] duration metric: took 4.870076381s to provisionDockerMachine
	I1014 20:25:57.886091  594320 start.go:293] postStartSetup for "ha-378189-m02" (driver="docker")
	I1014 20:25:57.886102  594320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:25:57.886163  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:25:57.886208  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:57.904127  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:58.005798  594320 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:25:58.011149  594320 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:25:58.011183  594320 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:25:58.011195  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 20:25:58.011261  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 20:25:58.011342  594320 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 20:25:58.011354  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /etc/ssl/certs/5469102.pem
	I1014 20:25:58.011454  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:25:58.019728  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:25:58.038734  594320 start.go:296] duration metric: took 152.626218ms for postStartSetup
	I1014 20:25:58.038827  594320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:25:58.038875  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:58.056476  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:58.157712  594320 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:25:58.162651  594320 fix.go:56] duration metric: took 5.729042528s for fixHost
	I1014 20:25:58.162676  594320 start.go:83] releasing machines lock for "ha-378189-m02", held for 5.729094813s
	I1014 20:25:58.162747  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m02
	I1014 20:25:58.183647  594320 out.go:179] * Found network options:
	I1014 20:25:58.186478  594320 out.go:179]   - NO_PROXY=192.168.49.2
	W1014 20:25:58.189324  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	W1014 20:25:58.189372  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	I1014 20:25:58.189452  594320 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:25:58.189507  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:58.189778  594320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:25:58.189920  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:58.209437  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:58.218677  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:58.415461  594320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:25:58.420177  594320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:25:58.420296  594320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:25:58.428278  594320 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:25:58.428300  594320 start.go:495] detecting cgroup driver to use...
	I1014 20:25:58.428405  594320 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 20:25:58.428468  594320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:25:58.444844  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:25:58.457892  594320 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:25:58.457983  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:25:58.475762  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:25:58.489635  594320 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:25:58.621200  594320 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:25:58.796095  594320 docker.go:234] disabling docker service ...
	I1014 20:25:58.796209  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:25:58.827751  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:25:58.844957  594320 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:25:59.096618  594320 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:25:59.311841  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:25:59.331768  594320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:25:59.354924  594320 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:25:59.355038  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.367271  594320 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:25:59.367384  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.382002  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.395622  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.410728  594320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:25:59.426726  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.444708  594320 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.458277  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.469637  594320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:25:59.481336  594320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:25:59.491928  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:25:59.720255  594320 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:25:59.958409  594320 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:25:59.958540  594320 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:25:59.962293  594320 start.go:563] Will wait 60s for crictl version
	I1014 20:25:59.962423  594320 ssh_runner.go:195] Run: which crictl
	I1014 20:25:59.966088  594320 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:25:59.991722  594320 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
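Just before this version probe, the sed edits above rewrote /etc/crio/crio.conf.d/02-crio.conf: pause_image pinned to registry.k8s.io/pause:3.10.1, cgroup_manager set to cgroupfs, conmon_cgroup set to pod, and net.ipv4.ip_unprivileged_port_start=0 added under default_sysctls. A hedged way to read those fields back from the drop-in (running it through minikube ssh -n m02 is this sketch's assumption):

    # Hedged sketch: show the drop-in values the sed edits above are meant to produce.
    minikube -p ha-378189 ssh -n m02 -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf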
	I1014 20:25:59.991842  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:26:00.113230  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:26:00.165429  594320 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:26:00.170886  594320 out.go:179]   - env NO_PROXY=192.168.49.2
	I1014 20:26:00.174530  594320 cli_runner.go:164] Run: docker network inspect ha-378189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:26:00.218865  594320 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:26:00.228535  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:26:00.249249  594320 mustload.go:65] Loading cluster: ha-378189
	I1014 20:26:00.249528  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:26:00.249874  594320 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:26:00.286648  594320 host.go:66] Checking if "ha-378189" exists ...
	I1014 20:26:00.286974  594320 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189 for IP: 192.168.49.3
	I1014 20:26:00.287300  594320 certs.go:195] generating shared ca certs ...
	I1014 20:26:00.287327  594320 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:26:00.287479  594320 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 20:26:00.287528  594320 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 20:26:00.287537  594320 certs.go:257] generating profile certs ...
	I1014 20:26:00.287629  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key
	I1014 20:26:00.287704  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.31763e79
	I1014 20:26:00.287740  594320 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key
	I1014 20:26:00.287749  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:26:00.287763  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:26:00.287775  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:26:00.287791  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:26:00.287805  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:26:00.287818  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:26:00.287829  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:26:00.287851  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:26:00.287914  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 20:26:00.287952  594320 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 20:26:00.287961  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 20:26:00.287988  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:26:00.288014  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:26:00.288040  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 20:26:00.288093  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:26:00.288127  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem -> /usr/share/ca-certificates/546910.pem
	I1014 20:26:00.288141  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /usr/share/ca-certificates/5469102.pem
	I1014 20:26:00.288152  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:26:00.288213  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:26:00.323762  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:26:00.428739  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1014 20:26:00.433364  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1014 20:26:00.442968  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1014 20:26:00.447387  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1014 20:26:00.457017  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1014 20:26:00.461913  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1014 20:26:00.472168  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1014 20:26:00.476286  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1014 20:26:00.486124  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1014 20:26:00.490540  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1014 20:26:00.499485  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1014 20:26:00.503069  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1014 20:26:00.511944  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:26:00.532142  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:26:00.549917  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:26:00.568202  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 20:26:00.586156  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 20:26:00.603803  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:26:00.621171  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:26:00.638805  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:26:00.656530  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 20:26:00.674792  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 20:26:00.693738  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:26:00.710998  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1014 20:26:00.723268  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1014 20:26:00.736291  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1014 20:26:00.748826  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1014 20:26:00.764190  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1014 20:26:00.786939  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1014 20:26:00.807509  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1014 20:26:00.822235  594320 ssh_runner.go:195] Run: openssl version
	I1014 20:26:00.829716  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 20:26:00.839312  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 20:26:00.844086  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 20:26:00.844196  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 20:26:00.887371  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 20:26:00.895171  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 20:26:00.903441  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 20:26:00.907140  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 20:26:00.907258  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 20:26:00.947880  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:26:00.955732  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:26:00.964076  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:26:00.967857  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:26:00.967921  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:26:01.009173  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:26:01.017414  594320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:26:01.021430  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:26:01.062823  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:26:01.103976  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:26:01.145434  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:26:01.188131  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:26:01.229876  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 20:26:01.271831  594320 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1014 20:26:01.271953  594320 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-378189-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:26:01.271983  594320 kube-vip.go:115] generating kube-vip config ...
	I1014 20:26:01.272034  594320 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:26:01.286472  594320 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:26:01.286553  594320 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1014 20:26:01.286643  594320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:26:01.295904  594320 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:26:01.296015  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1014 20:26:01.304007  594320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1014 20:26:01.318122  594320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:26:01.331599  594320 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1014 20:26:01.348453  594320 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:26:01.353197  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:26:01.364628  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:26:01.509428  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:26:01.524168  594320 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:26:01.524671  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:26:01.530763  594320 out.go:179] * Verifying Kubernetes components...
	I1014 20:26:01.533842  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:26:01.682729  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:26:01.697778  594320 kapi.go:59] client config for ha-378189: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key", CAFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1014 20:26:01.697855  594320 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1014 20:26:01.698087  594320 node_ready.go:35] waiting up to 6m0s for node "ha-378189-m02" to be "Ready" ...
	W1014 20:26:11.701290  594320 node_ready.go:55] error getting node "ha-378189-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-378189-m02": net/http: TLS handshake timeout
	I1014 20:26:15.977480  594320 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-378189-m02"
	I1014 20:26:19.960495  594320 node_ready.go:49] node "ha-378189-m02" is "Ready"
	I1014 20:26:19.960524  594320 node_ready.go:38] duration metric: took 18.262413251s for node "ha-378189-m02" to be "Ready" ...
	I1014 20:26:19.960537  594320 api_server.go:52] waiting for apiserver process to appear ...
	I1014 20:26:19.960604  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:20.461531  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:20.961464  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:21.461540  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:21.961480  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:22.461610  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:22.961486  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:22.977774  594320 api_server.go:72] duration metric: took 21.453514103s to wait for apiserver process to appear ...
	I1014 20:26:22.977840  594320 api_server.go:88] waiting for apiserver healthz status ...
	I1014 20:26:22.977876  594320 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1014 20:26:22.986624  594320 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1014 20:26:22.988140  594320 api_server.go:141] control plane version: v1.34.1
	I1014 20:26:22.988198  594320 api_server.go:131] duration metric: took 10.336359ms to wait for apiserver health ...
	I1014 20:26:22.988222  594320 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 20:26:23.017914  594320 system_pods.go:59] 26 kube-system pods found
	I1014 20:26:23.018013  594320 system_pods.go:61] "coredns-66bc5c9577-fqrq6" [2464884d-50bb-485c-827e-c58495af789d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:26:23.018041  594320 system_pods.go:61] "coredns-66bc5c9577-rdfdx" [5150fb32-65b6-41fa-b82c-e867a53c8158] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:26:23.018081  594320 system_pods.go:61] "etcd-ha-378189" [cbc1d094-bd19-4f9a-bcd9-873a3cb9b55b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:26:23.018111  594320 system_pods.go:61] "etcd-ha-378189-m02" [594ab6b1-12e7-4159-9682-e9fa603c5d52] Running
	I1014 20:26:23.018134  594320 system_pods.go:61] "etcd-ha-378189-m03" [b76bfaaf-3366-477b-94da-75c707626766] Running
	I1014 20:26:23.018156  594320 system_pods.go:61] "kindnet-54mnm" [564ecf63-fb19-489c-a805-d20f44a94e83] Running
	I1014 20:26:23.018187  594320 system_pods.go:61] "kindnet-k6rbj" [f8b77dee-4c1f-4dec-88f8-5743851b6af0] Running
	I1014 20:26:23.018213  594320 system_pods.go:61] "kindnet-qjxhf" [5ccc29da-500f-4196-a88e-2cf74b77ab1c] Running
	I1014 20:26:23.018237  594320 system_pods.go:61] "kindnet-xs5v9" [fcdfe744-7240-4d2c-887b-3ace1aaf041d] Running
	I1014 20:26:23.018263  594320 system_pods.go:61] "kube-apiserver-ha-378189" [4f4fdbc4-daf1-4540-9355-eec38bcb2686] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:26:23.018296  594320 system_pods.go:61] "kube-apiserver-ha-378189-m02" [8efb89f9-62e4-434c-8cf9-90396bbb1186] Running
	I1014 20:26:23.018325  594320 system_pods.go:61] "kube-apiserver-ha-378189-m03" [241641d1-f1d5-4637-86b3-4bcc9863e83b] Running
	I1014 20:26:23.018353  594320 system_pods.go:61] "kube-controller-manager-ha-378189" [b711c7e1-975a-46b7-b60a-319cf8348e0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:26:23.018378  594320 system_pods.go:61] "kube-controller-manager-ha-378189-m02" [088feae1-1150-4c63-bba2-2d0ee8946f9a] Running
	I1014 20:26:23.018412  594320 system_pods.go:61] "kube-controller-manager-ha-378189-m03" [be5f778e-22e5-4466-a47c-5bc10dd2db5c] Running
	I1014 20:26:23.018440  594320 system_pods.go:61] "kube-proxy-2cc9g" [bf08b638-392a-4fe7-b1d6-f1a954657426] Running
	I1014 20:26:23.018463  594320 system_pods.go:61] "kube-proxy-dkq7w" [bf1cc632-a47e-4253-8cc8-5b38f7bd7add] Running
	I1014 20:26:23.018488  594320 system_pods.go:61] "kube-proxy-qhldf" [1394e466-3ef9-41b3-9fec-0f25b4e85177] Running
	I1014 20:26:23.018518  594320 system_pods.go:61] "kube-proxy-vcfj7" [c29665fb-40d6-4509-be63-bcb379888e1a] Running
	I1014 20:26:23.018546  594320 system_pods.go:61] "kube-scheduler-ha-378189" [843e7bc6-df65-4d7f-ac9a-c64da55abe53] Running
	I1014 20:26:23.018569  594320 system_pods.go:61] "kube-scheduler-ha-378189-m02" [d4c1e5ac-f2ef-4267-bf35-77bae0238d52] Running
	I1014 20:26:23.018590  594320 system_pods.go:61] "kube-scheduler-ha-378189-m03" [d8dfc00b-b2b4-42b7-90cf-7dc44e9e701c] Running
	I1014 20:26:23.018622  594320 system_pods.go:61] "kube-vip-ha-378189" [3a526d2d-ff79-4c9c-9026-adda8cfe01ed] Running
	I1014 20:26:23.018648  594320 system_pods.go:61] "kube-vip-ha-378189-m02" [4671eaeb-fa41-4d07-b180-6230480daa01] Running
	I1014 20:26:23.018672  594320 system_pods.go:61] "kube-vip-ha-378189-m03" [66eac993-126e-415c-bd2a-93d8afded195] Running
	I1014 20:26:23.018696  594320 system_pods.go:61] "storage-provisioner" [6cd440a2-0b51-4f78-8285-48bf107cf288] Running
	I1014 20:26:23.018729  594320 system_pods.go:74] duration metric: took 30.48728ms to wait for pod list to return data ...
	I1014 20:26:23.018756  594320 default_sa.go:34] waiting for default service account to be created ...
	I1014 20:26:23.026171  594320 default_sa.go:45] found service account: "default"
	I1014 20:26:23.026235  594320 default_sa.go:55] duration metric: took 7.458111ms for default service account to be created ...
	I1014 20:26:23.026258  594320 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 20:26:23.039389  594320 system_pods.go:86] 26 kube-system pods found
	I1014 20:26:23.039477  594320 system_pods.go:89] "coredns-66bc5c9577-fqrq6" [2464884d-50bb-485c-827e-c58495af789d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:26:23.039504  594320 system_pods.go:89] "coredns-66bc5c9577-rdfdx" [5150fb32-65b6-41fa-b82c-e867a53c8158] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:26:23.039543  594320 system_pods.go:89] "etcd-ha-378189" [cbc1d094-bd19-4f9a-bcd9-873a3cb9b55b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:26:23.039568  594320 system_pods.go:89] "etcd-ha-378189-m02" [594ab6b1-12e7-4159-9682-e9fa603c5d52] Running
	I1014 20:26:23.039589  594320 system_pods.go:89] "etcd-ha-378189-m03" [b76bfaaf-3366-477b-94da-75c707626766] Running
	I1014 20:26:23.039615  594320 system_pods.go:89] "kindnet-54mnm" [564ecf63-fb19-489c-a805-d20f44a94e83] Running
	I1014 20:26:23.039649  594320 system_pods.go:89] "kindnet-k6rbj" [f8b77dee-4c1f-4dec-88f8-5743851b6af0] Running
	I1014 20:26:23.039674  594320 system_pods.go:89] "kindnet-qjxhf" [5ccc29da-500f-4196-a88e-2cf74b77ab1c] Running
	I1014 20:26:23.039695  594320 system_pods.go:89] "kindnet-xs5v9" [fcdfe744-7240-4d2c-887b-3ace1aaf041d] Running
	I1014 20:26:23.039724  594320 system_pods.go:89] "kube-apiserver-ha-378189" [4f4fdbc4-daf1-4540-9355-eec38bcb2686] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:26:23.039755  594320 system_pods.go:89] "kube-apiserver-ha-378189-m02" [8efb89f9-62e4-434c-8cf9-90396bbb1186] Running
	I1014 20:26:23.039784  594320 system_pods.go:89] "kube-apiserver-ha-378189-m03" [241641d1-f1d5-4637-86b3-4bcc9863e83b] Running
	I1014 20:26:23.039823  594320 system_pods.go:89] "kube-controller-manager-ha-378189" [b711c7e1-975a-46b7-b60a-319cf8348e0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:26:23.039845  594320 system_pods.go:89] "kube-controller-manager-ha-378189-m02" [088feae1-1150-4c63-bba2-2d0ee8946f9a] Running
	I1014 20:26:23.039882  594320 system_pods.go:89] "kube-controller-manager-ha-378189-m03" [be5f778e-22e5-4466-a47c-5bc10dd2db5c] Running
	I1014 20:26:23.039900  594320 system_pods.go:89] "kube-proxy-2cc9g" [bf08b638-392a-4fe7-b1d6-f1a954657426] Running
	I1014 20:26:23.039921  594320 system_pods.go:89] "kube-proxy-dkq7w" [bf1cc632-a47e-4253-8cc8-5b38f7bd7add] Running
	I1014 20:26:23.039955  594320 system_pods.go:89] "kube-proxy-qhldf" [1394e466-3ef9-41b3-9fec-0f25b4e85177] Running
	I1014 20:26:23.039975  594320 system_pods.go:89] "kube-proxy-vcfj7" [c29665fb-40d6-4509-be63-bcb379888e1a] Running
	I1014 20:26:23.039993  594320 system_pods.go:89] "kube-scheduler-ha-378189" [843e7bc6-df65-4d7f-ac9a-c64da55abe53] Running
	I1014 20:26:23.040012  594320 system_pods.go:89] "kube-scheduler-ha-378189-m02" [d4c1e5ac-f2ef-4267-bf35-77bae0238d52] Running
	I1014 20:26:23.040031  594320 system_pods.go:89] "kube-scheduler-ha-378189-m03" [d8dfc00b-b2b4-42b7-90cf-7dc44e9e701c] Running
	I1014 20:26:23.040060  594320 system_pods.go:89] "kube-vip-ha-378189" [3a526d2d-ff79-4c9c-9026-adda8cfe01ed] Running
	I1014 20:26:23.040088  594320 system_pods.go:89] "kube-vip-ha-378189-m02" [4671eaeb-fa41-4d07-b180-6230480daa01] Running
	I1014 20:26:23.040111  594320 system_pods.go:89] "kube-vip-ha-378189-m03" [66eac993-126e-415c-bd2a-93d8afded195] Running
	I1014 20:26:23.040136  594320 system_pods.go:89] "storage-provisioner" [6cd440a2-0b51-4f78-8285-48bf107cf288] Running
	I1014 20:26:23.040170  594320 system_pods.go:126] duration metric: took 13.891803ms to wait for k8s-apps to be running ...
	I1014 20:26:23.040201  594320 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 20:26:23.040293  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:26:23.063946  594320 system_svc.go:56] duration metric: took 23.73711ms WaitForService to wait for kubelet
	I1014 20:26:23.064017  594320 kubeadm.go:586] duration metric: took 21.539761018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:26:23.064051  594320 node_conditions.go:102] verifying NodePressure condition ...
	I1014 20:26:23.082262  594320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:26:23.082345  594320 node_conditions.go:123] node cpu capacity is 2
	I1014 20:26:23.082370  594320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:26:23.082388  594320 node_conditions.go:123] node cpu capacity is 2
	I1014 20:26:23.082423  594320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:26:23.082450  594320 node_conditions.go:123] node cpu capacity is 2
	I1014 20:26:23.082472  594320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:26:23.082494  594320 node_conditions.go:123] node cpu capacity is 2
	I1014 20:26:23.082525  594320 node_conditions.go:105] duration metric: took 18.445212ms to run NodePressure ...
	I1014 20:26:23.082553  594320 start.go:241] waiting for startup goroutines ...
	I1014 20:26:23.082589  594320 start.go:255] writing updated cluster config ...
	I1014 20:26:23.086128  594320 out.go:203] 
	I1014 20:26:23.089286  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:26:23.089471  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:26:23.092927  594320 out.go:179] * Starting "ha-378189-m03" control-plane node in "ha-378189" cluster
	I1014 20:26:23.096535  594320 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:26:23.099560  594320 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:26:23.102321  594320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:26:23.102371  594320 cache.go:58] Caching tarball of preloaded images
	I1014 20:26:23.102397  594320 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:26:23.102501  594320 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 20:26:23.102517  594320 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:26:23.102654  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:26:23.136476  594320 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:26:23.136495  594320 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:26:23.136508  594320 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:26:23.136531  594320 start.go:360] acquireMachinesLock for ha-378189-m03: {Name:mk212b55d7ba2de9e9f1edd93e6a58b4ed6f2ea7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:26:23.136583  594320 start.go:364] duration metric: took 37.113µs to acquireMachinesLock for "ha-378189-m03"
	I1014 20:26:23.136602  594320 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:26:23.136607  594320 fix.go:54] fixHost starting: m03
	I1014 20:26:23.136861  594320 cli_runner.go:164] Run: docker container inspect ha-378189-m03 --format={{.State.Status}}
	I1014 20:26:23.180150  594320 fix.go:112] recreateIfNeeded on ha-378189-m03: state=Stopped err=<nil>
	W1014 20:26:23.180176  594320 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:26:23.183408  594320 out.go:252] * Restarting existing docker container for "ha-378189-m03" ...
	I1014 20:26:23.183496  594320 cli_runner.go:164] Run: docker start ha-378189-m03
	I1014 20:26:23.583316  594320 cli_runner.go:164] Run: docker container inspect ha-378189-m03 --format={{.State.Status}}
	I1014 20:26:23.617146  594320 kic.go:430] container "ha-378189-m03" state is running.
	I1014 20:26:23.617687  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m03
	I1014 20:26:23.654101  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:26:23.654351  594320 machine.go:93] provisionDockerMachine start ...
	I1014 20:26:23.654417  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:23.679159  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:26:23.679663  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1014 20:26:23.679677  594320 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:26:23.680477  594320 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1014 20:26:26.929139  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189-m03
	
	I1014 20:26:26.929205  594320 ubuntu.go:182] provisioning hostname "ha-378189-m03"
	I1014 20:26:26.929306  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:26.953169  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:26:26.953484  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1014 20:26:26.953497  594320 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-378189-m03 && echo "ha-378189-m03" | sudo tee /etc/hostname
	I1014 20:26:27.366006  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189-m03
	
	I1014 20:26:27.366194  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:27.395992  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:26:27.396293  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1014 20:26:27.396309  594320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-378189-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-378189-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-378189-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:26:27.693852  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:26:27.693921  594320 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 20:26:27.693958  594320 ubuntu.go:190] setting up certificates
	I1014 20:26:27.694000  594320 provision.go:84] configureAuth start
	I1014 20:26:27.694087  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m03
	I1014 20:26:27.718087  594320 provision.go:143] copyHostCerts
	I1014 20:26:27.718133  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:26:27.718165  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 20:26:27.718172  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:26:27.718249  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 20:26:27.718323  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:26:27.718348  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 20:26:27.718353  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:26:27.718380  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 20:26:27.718424  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:26:27.718444  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 20:26:27.718452  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:26:27.718478  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 20:26:27.718527  594320 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.ha-378189-m03 san=[127.0.0.1 192.168.49.4 ha-378189-m03 localhost minikube]
	I1014 20:26:28.885539  594320 provision.go:177] copyRemoteCerts
	I1014 20:26:28.885609  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:26:28.885857  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:28.909377  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:29.092351  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:26:29.092437  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:26:29.150396  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:26:29.150575  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 20:26:29.212427  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:26:29.212488  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:26:29.291296  594320 provision.go:87] duration metric: took 1.597262612s to configureAuth
	I1014 20:26:29.291319  594320 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:26:29.291554  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:26:29.291658  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:29.325450  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:26:29.325769  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1014 20:26:29.325785  594320 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:26:29.937523  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:26:29.937544  594320 machine.go:96] duration metric: took 6.283184743s to provisionDockerMachine
	I1014 20:26:29.937556  594320 start.go:293] postStartSetup for "ha-378189-m03" (driver="docker")
	I1014 20:26:29.937573  594320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:26:29.937654  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:26:29.937717  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:29.965387  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:30.103245  594320 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:26:30.108041  594320 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:26:30.108068  594320 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:26:30.108080  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 20:26:30.108144  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 20:26:30.108221  594320 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 20:26:30.108228  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /etc/ssl/certs/5469102.pem
	I1014 20:26:30.108351  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:26:30.118723  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:26:30.147273  594320 start.go:296] duration metric: took 209.701827ms for postStartSetup
	I1014 20:26:30.147363  594320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:26:30.147417  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:30.167399  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:30.286490  594320 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:26:30.294616  594320 fix.go:56] duration metric: took 7.158001675s for fixHost
	I1014 20:26:30.294649  594320 start.go:83] releasing machines lock for "ha-378189-m03", held for 7.158058317s
	I1014 20:26:30.294720  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m03
	I1014 20:26:30.332338  594320 out.go:179] * Found network options:
	I1014 20:26:30.335270  594320 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1014 20:26:30.338218  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	W1014 20:26:30.338252  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	W1014 20:26:30.338276  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	W1014 20:26:30.338286  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	I1014 20:26:30.338361  594320 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:26:30.338399  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:30.338666  594320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:26:30.338727  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:30.371445  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:30.378598  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:30.685955  594320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:26:30.710071  594320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:26:30.710156  594320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:26:30.762990  594320 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:26:30.763023  594320 start.go:495] detecting cgroup driver to use...
	I1014 20:26:30.763054  594320 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 20:26:30.763113  594320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:26:30.789796  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:26:30.813445  594320 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:26:30.813530  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:26:30.839481  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:26:30.861859  594320 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:26:31.219302  594320 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:26:31.622468  594320 docker.go:234] disabling docker service ...
	I1014 20:26:31.622554  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:26:31.669472  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:26:31.699352  594320 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:26:32.097354  594320 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:26:32.482606  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:26:32.523674  594320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:26:32.577860  594320 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:26:32.577931  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.625402  594320 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:26:32.625482  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.642162  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.655785  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.690980  594320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:26:32.710947  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.738596  594320 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.764228  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.794648  594320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:26:32.819907  594320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:26:32.841183  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:26:33.263701  594320 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:28:03.725730  594320 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.461946041s)
	I1014 20:28:03.725755  594320 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:28:03.725818  594320 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:28:03.730985  594320 start.go:563] Will wait 60s for crictl version
	I1014 20:28:03.731051  594320 ssh_runner.go:195] Run: which crictl
	I1014 20:28:03.734613  594320 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:28:03.777080  594320 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:28:03.777171  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:28:03.808074  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:28:03.842942  594320 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:28:03.845779  594320 out.go:179]   - env NO_PROXY=192.168.49.2
	I1014 20:28:03.848732  594320 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1014 20:28:03.851762  594320 cli_runner.go:164] Run: docker network inspect ha-378189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:28:03.869115  594320 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:28:03.872847  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:28:03.883220  594320 mustload.go:65] Loading cluster: ha-378189
	I1014 20:28:03.883459  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:28:03.883722  594320 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:28:03.902527  594320 host.go:66] Checking if "ha-378189" exists ...
	I1014 20:28:03.902796  594320 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189 for IP: 192.168.49.4
	I1014 20:28:03.902811  594320 certs.go:195] generating shared ca certs ...
	I1014 20:28:03.902825  594320 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:28:03.902952  594320 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 20:28:03.902998  594320 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 20:28:03.903010  594320 certs.go:257] generating profile certs ...
	I1014 20:28:03.903083  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key
	I1014 20:28:03.903150  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.02897361
	I1014 20:28:03.903194  594320 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key
	I1014 20:28:03.903292  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:28:03.903318  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:28:03.903337  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:28:03.903356  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:28:03.903373  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:28:03.903388  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:28:03.903405  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:28:03.903422  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:28:03.903477  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 20:28:03.903513  594320 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 20:28:03.903525  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 20:28:03.903550  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:28:03.903577  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:28:03.903603  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 20:28:03.903647  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:28:03.903678  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:28:03.903693  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem -> /usr/share/ca-certificates/546910.pem
	I1014 20:28:03.903709  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /usr/share/ca-certificates/5469102.pem
	I1014 20:28:03.903766  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:28:03.921249  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:28:04.020701  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1014 20:28:04.026586  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1014 20:28:04.037048  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1014 20:28:04.040988  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1014 20:28:04.050562  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1014 20:28:04.054508  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1014 20:28:04.063039  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1014 20:28:04.066687  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1014 20:28:04.075046  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1014 20:28:04.079077  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1014 20:28:04.087651  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1014 20:28:04.091714  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1014 20:28:04.099957  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:28:04.120044  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:28:04.141157  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:28:04.160012  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 20:28:04.182538  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 20:28:04.206993  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:28:04.231475  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:28:04.253537  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:28:04.275116  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:28:04.294127  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 20:28:04.313010  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 20:28:04.337638  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1014 20:28:04.350377  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1014 20:28:04.365979  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1014 20:28:04.380227  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1014 20:28:04.402519  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1014 20:28:04.428480  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1014 20:28:04.444300  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1014 20:28:04.465409  594320 ssh_runner.go:195] Run: openssl version
	I1014 20:28:04.472158  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:28:04.481746  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:28:04.485919  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:28:04.486029  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:28:04.534226  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:28:04.542709  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 20:28:04.552210  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 20:28:04.557889  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 20:28:04.558010  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 20:28:04.600600  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 20:28:04.608591  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 20:28:04.617463  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 20:28:04.621083  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 20:28:04.621172  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 20:28:04.663633  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:28:04.671866  594320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:28:04.675585  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:28:04.717923  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:28:04.759364  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:28:04.801264  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:28:04.842553  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:28:04.888303  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 20:28:04.930552  594320 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1014 20:28:04.930687  594320 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-378189-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:28:04.930727  594320 kube-vip.go:115] generating kube-vip config ...
	I1014 20:28:04.930788  594320 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:28:04.948125  594320 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:28:04.948234  594320 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1014 20:28:04.948301  594320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:28:04.956444  594320 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:28:04.956519  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1014 20:28:04.964479  594320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1014 20:28:04.980043  594320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:28:04.994905  594320 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1014 20:28:05.014277  594320 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:28:05.019048  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:28:05.032269  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:28:05.165397  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:28:05.183614  594320 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:28:05.184053  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:28:05.189131  594320 out.go:179] * Verifying Kubernetes components...
	I1014 20:28:05.192051  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:28:05.332393  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:28:05.346845  594320 kapi.go:59] client config for ha-378189: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key", CAFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1014 20:28:05.346924  594320 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1014 20:28:05.347150  594320 node_ready.go:35] waiting up to 6m0s for node "ha-378189-m03" to be "Ready" ...
	W1014 20:28:07.351440  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:09.352350  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:11.851687  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:13.851750  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:16.351273  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:18.351732  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:20.354256  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:22.851172  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:24.851281  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:27.351420  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:29.851348  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:32.351357  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:34.851229  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:36.852278  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:39.352122  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:41.851851  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:44.351532  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:46.850950  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:49.352299  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:51.850817  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:53.851323  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:56.350480  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:58.351469  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:00.361289  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:02.851458  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:05.350476  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:07.350815  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:09.351365  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:11.851059  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:13.851563  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:16.351339  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:18.850245  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:20.855138  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:23.351487  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:25.852768  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:28.351284  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:30.351537  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:32.851539  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:35.351487  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:37.850912  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:39.851077  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:41.851598  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:44.351457  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:46.351534  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:48.850734  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:50.851231  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:52.851432  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:55.351286  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:57.351688  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:59.851092  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:01.852021  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:04.350799  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:06.851328  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:09.351536  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:11.353512  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:13.851897  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:15.852000  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:18.351490  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:20.851616  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:23.351164  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:25.351400  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:27.352193  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:29.850751  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:31.851102  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:34.350486  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:36.351583  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:38.851490  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:41.350742  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:43.357724  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:45.850854  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:47.851206  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:50.350789  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:52.850626  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:54.851276  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:57.350761  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:59.351538  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:01.851391  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:04.350763  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:06.850386  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:08.851237  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:11.351648  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:13.850996  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:15.851345  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:18.351324  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:20.355277  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:22.851213  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:25.350735  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:27.850574  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:29.856637  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:32.350493  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:34.351300  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:36.353689  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:38.851719  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:40.852010  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:43.351491  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:45.354360  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:47.851118  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:49.851408  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:52.351137  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:54.851402  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:57.350516  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:59.351590  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:01.851121  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:04.350628  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:06.350691  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:08.351228  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:10.851190  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:13.351117  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:15.850576  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:17.851400  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:20.351576  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:22.851057  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:25.351322  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:27.355638  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:29.851145  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:31.851487  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:34.352259  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:36.850522  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:39.351831  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:41.850768  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:44.350717  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:46.351296  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:48.850913  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:50.851023  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:53.350835  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:55.353231  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:57.851446  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:00.354501  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:02.850997  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:05.350786  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:07.350850  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:09.351182  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:11.850869  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:13.851809  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:16.351480  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:18.850797  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:20.851760  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:23.351587  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:25.850979  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:28.351624  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:30.851397  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:33.351390  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:35.351573  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:37.850906  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:40.351625  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:42.852099  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:45.352080  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:47.850966  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:49.851115  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:51.851586  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:54.350406  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:56.350741  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:58.850717  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:34:00.850850  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:34:02.851136  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	I1014 20:34:05.347342  594320 node_ready.go:38] duration metric: took 6m0.000162726s for node "ha-378189-m03" to be "Ready" ...
	I1014 20:34:05.350610  594320 out.go:203] 
	W1014 20:34:05.353525  594320 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1014 20:34:05.353550  594320 out.go:285] * 
	* 
	W1014 20:34:05.355696  594320 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:34:05.358382  594320 out.go:203] 

                                                
                                                
** /stderr **
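(Editorial aside, not part of the captured output.) One detail in the stderr above: kube-vip.go:163 gives up on control-plane load-balancing because "lsmod | grep ip_vs" returned nothing on the node. As a hedged sketch only — assuming the host kernel actually ships the IPVS modules, and without claiming kube-vip strictly requires all of them — the usual IPVS module set could be loaded and re-checked like this:

	# load the standard IPVS modules (-a loads every name listed), then re-check
	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
	lsmod | grep ip_vs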
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-arm64 -p ha-378189 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 node list --alsologtostderr -v 5
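(Editorial aside, not part of the test harness.) The start command failed with exit status 80 because node ha-378189-m03 reported "Ready":"Unknown" on every poll for the full 6m0s node_ready wait. The condition that wait loop polls can be read directly with kubectl, assuming the profile's kubeconfig context ("ha-378189") exists on the host:

	kubectl --context ha-378189 get node ha-378189-m03 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{" "}{.status.conditions[?(@.type=="Ready")].reason}{"\n"}'

A Ready status of "Unknown" typically means the node controller stopped receiving status updates from the kubelet on m03 after the restart.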
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-378189
helpers_test.go:243: (dbg) docker inspect ha-378189:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2",
	        "Created": "2025-10-14T20:19:36.721865815Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 594446,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:25:43.632516423Z",
	            "FinishedAt": "2025-10-14T20:25:42.992747077Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/hosts",
	        "LogPath": "/var/lib/docker/containers/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2-json.log",
	        "Name": "/ha-378189",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-378189:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-378189",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2",
	                "LowerDir": "/var/lib/docker/overlay2/9ac8fb770ddeadce23f587f1c58404e77e9b514b1d0afc23aafe028ba7c4d415-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9ac8fb770ddeadce23f587f1c58404e77e9b514b1d0afc23aafe028ba7c4d415/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9ac8fb770ddeadce23f587f1c58404e77e9b514b1d0afc23aafe028ba7c4d415/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9ac8fb770ddeadce23f587f1c58404e77e9b514b1d0afc23aafe028ba7c4d415/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-378189",
	                "Source": "/var/lib/docker/volumes/ha-378189/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-378189",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-378189",
	                "name.minikube.sigs.k8s.io": "ha-378189",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05a8c6a7d89e6c1e1256598987cf30badae51e341335b4f7acba8aa6e9381f33",
	            "SandboxKey": "/var/run/docker/netns/05a8c6a7d89e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33547"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33548"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33551"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33549"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33550"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-378189": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:83:31:11:3e:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b3483fd8ccce49d2e0fd5e22713527a7e829a5fed87dbabc5f1c994fe1baa1e7",
	                    "EndpointID": "ba620ff7fedd67e5484d20b7d869783e6c44a023d61f3b6e243822cf66dbdde7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-378189",
	                        "bbe33f2b58f8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
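(Editorial aside.) The most useful part of the inspect output above is the NetworkSettings.Ports block, which records how the container's fixed ports (22, 2376, 5000, 8443, 32443) are published on ephemeral 127.0.0.1 host ports. A single mapping can be pulled back out of that JSON with a Go template; for the container name above, the 8443/tcp entry is the API server port (33550 in this run):

	docker inspect ha-378189 \
	  --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'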
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-378189 -n ha-378189
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-378189 logs -n 25: (1.593475838s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-378189 cp ha-378189-m03:/home/docker/cp-test.txt ha-378189-m02:/home/docker/cp-test_ha-378189-m03_ha-378189-m02.txt               │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m02 sudo cat /home/docker/cp-test_ha-378189-m03_ha-378189-m02.txt                                         │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ cp      │ ha-378189 cp ha-378189-m03:/home/docker/cp-test.txt ha-378189-m04:/home/docker/cp-test_ha-378189-m03_ha-378189-m04.txt               │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m04 sudo cat /home/docker/cp-test_ha-378189-m03_ha-378189-m04.txt                                         │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ cp      │ ha-378189 cp testdata/cp-test.txt ha-378189-m04:/home/docker/cp-test.txt                                                             │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ cp      │ ha-378189 cp ha-378189-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3053045460/001/cp-test_ha-378189-m04.txt │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ cp      │ ha-378189 cp ha-378189-m04:/home/docker/cp-test.txt ha-378189:/home/docker/cp-test_ha-378189-m04_ha-378189.txt                       │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189 sudo cat /home/docker/cp-test_ha-378189-m04_ha-378189.txt                                                 │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ cp      │ ha-378189 cp ha-378189-m04:/home/docker/cp-test.txt ha-378189-m02:/home/docker/cp-test_ha-378189-m04_ha-378189-m02.txt               │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m02 sudo cat /home/docker/cp-test_ha-378189-m04_ha-378189-m02.txt                                         │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ cp      │ ha-378189 cp ha-378189-m04:/home/docker/cp-test.txt ha-378189-m03:/home/docker/cp-test_ha-378189-m04_ha-378189-m03.txt               │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m03 sudo cat /home/docker/cp-test_ha-378189-m04_ha-378189-m03.txt                                         │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ node    │ ha-378189 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ node    │ ha-378189 node start m02 --alsologtostderr -v 5                                                                                      │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:25 UTC │
	│ node    │ ha-378189 node list --alsologtostderr -v 5                                                                                           │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:25 UTC │                     │
	│ stop    │ ha-378189 stop --alsologtostderr -v 5                                                                                                │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:25 UTC │ 14 Oct 25 20:25 UTC │
	│ start   │ ha-378189 start --wait true --alsologtostderr -v 5                                                                                   │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:25 UTC │                     │
	│ node    │ ha-378189 node list --alsologtostderr -v 5                                                                                           │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:34 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:25:43
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:25:43.361106  594320 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:25:43.361469  594320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:25:43.361502  594320 out.go:374] Setting ErrFile to fd 2...
	I1014 20:25:43.361522  594320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:25:43.361834  594320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:25:43.362250  594320 out.go:368] Setting JSON to false
	I1014 20:25:43.363143  594320 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11294,"bootTime":1760462250,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 20:25:43.363234  594320 start.go:141] virtualization:  
	I1014 20:25:43.366372  594320 out.go:179] * [ha-378189] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 20:25:43.370273  594320 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:25:43.370502  594320 notify.go:220] Checking for updates...
	I1014 20:25:43.376212  594320 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:25:43.379153  594320 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 20:25:43.382007  594320 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 20:25:43.384781  594320 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 20:25:43.387595  594320 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:25:43.391018  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:43.391127  594320 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:25:43.417968  594320 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 20:25:43.418090  594320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:25:43.479361  594320 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-14 20:25:43.470111217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 20:25:43.479472  594320 docker.go:318] overlay module found
	I1014 20:25:43.482541  594320 out.go:179] * Using the docker driver based on existing profile
	I1014 20:25:43.485265  594320 start.go:305] selected driver: docker
	I1014 20:25:43.485282  594320 start.go:925] validating driver "docker" against &{Name:ha-378189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:25:43.485422  594320 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:25:43.485520  594320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:25:43.540613  594320 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-14 20:25:43.531183177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 20:25:43.541069  594320 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:25:43.541095  594320 cni.go:84] Creating CNI manager for ""
	I1014 20:25:43.541149  594320 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1014 20:25:43.541196  594320 start.go:349] cluster config:
	{Name:ha-378189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:25:43.546335  594320 out.go:179] * Starting "ha-378189" primary control-plane node in "ha-378189" cluster
	I1014 20:25:43.549190  594320 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:25:43.552105  594320 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:25:43.554956  594320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:25:43.555008  594320 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 20:25:43.555023  594320 cache.go:58] Caching tarball of preloaded images
	I1014 20:25:43.555058  594320 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:25:43.555118  594320 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 20:25:43.555128  594320 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:25:43.555280  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:43.574403  594320 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:25:43.574427  594320 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:25:43.574448  594320 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:25:43.574472  594320 start.go:360] acquireMachinesLock for ha-378189: {Name:mk72998cded3d94d4d24b35423c9bb42df68dec0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:25:43.574536  594320 start.go:364] duration metric: took 38.794µs to acquireMachinesLock for "ha-378189"
	I1014 20:25:43.574566  594320 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:25:43.574582  594320 fix.go:54] fixHost starting: 
	I1014 20:25:43.574842  594320 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:25:43.592022  594320 fix.go:112] recreateIfNeeded on ha-378189: state=Stopped err=<nil>
	W1014 20:25:43.592054  594320 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:25:43.595275  594320 out.go:252] * Restarting existing docker container for "ha-378189" ...
	I1014 20:25:43.595377  594320 cli_runner.go:164] Run: docker start ha-378189
	I1014 20:25:43.835199  594320 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:25:43.862801  594320 kic.go:430] container "ha-378189" state is running.
	I1014 20:25:43.863545  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189
	I1014 20:25:43.891763  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:43.891990  594320 machine.go:93] provisionDockerMachine start ...
	I1014 20:25:43.892050  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:43.915656  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:43.915978  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33547 <nil> <nil>}
	I1014 20:25:43.915987  594320 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:25:43.916996  594320 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1014 20:25:47.063750  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189
	
	I1014 20:25:47.063771  594320 ubuntu.go:182] provisioning hostname "ha-378189"
	I1014 20:25:47.063837  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:47.081648  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:47.081949  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33547 <nil> <nil>}
	I1014 20:25:47.081959  594320 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-378189 && echo "ha-378189" | sudo tee /etc/hostname
	I1014 20:25:47.237261  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189
	
	I1014 20:25:47.237361  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:47.256023  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:47.256364  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33547 <nil> <nil>}
	I1014 20:25:47.256388  594320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-378189' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-378189/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-378189' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:25:47.404333  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:25:47.404362  594320 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 20:25:47.404398  594320 ubuntu.go:190] setting up certificates
	I1014 20:25:47.404407  594320 provision.go:84] configureAuth start
	I1014 20:25:47.404474  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189
	I1014 20:25:47.421674  594320 provision.go:143] copyHostCerts
	I1014 20:25:47.421717  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:25:47.421746  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 20:25:47.421765  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:25:47.421841  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 20:25:47.421929  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:25:47.421951  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 20:25:47.421956  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:25:47.421989  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 20:25:47.422033  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:25:47.422054  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 20:25:47.422061  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:25:47.422086  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 20:25:47.422140  594320 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.ha-378189 san=[127.0.0.1 192.168.49.2 ha-378189 localhost minikube]
	I1014 20:25:47.717715  594320 provision.go:177] copyRemoteCerts
	I1014 20:25:47.717791  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:25:47.717832  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:47.737857  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:47.839808  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:25:47.839867  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:25:47.856581  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:25:47.856687  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:25:47.873767  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:25:47.873865  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:25:47.890677  594320 provision.go:87] duration metric: took 486.243045ms to configureAuth
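The server certificate generated a few lines up carries san=[127.0.0.1 192.168.49.2 ha-378189 localhost minikube] and is copied to /etc/docker/server.pem on the node. A minimal sketch, not part of the captured run, for confirming those SANs from inside the guest (same paths as the scp steps above; reach the node e.g. with `minikube ssh -p ha-378189`):

    # Show the Subject Alternative Names of the freshly provisioned server cert.
    sudo openssl x509 -noout -text -in /etc/docker/server.pem \
      | grep -A1 'Subject Alternative Name'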
	I1014 20:25:47.890712  594320 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:25:47.890978  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:47.891083  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:47.907847  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:47.908141  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33547 <nil> <nil>}
	I1014 20:25:47.908155  594320 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:25:48.237643  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:25:48.237669  594320 machine.go:96] duration metric: took 4.345669995s to provisionDockerMachine
	I1014 20:25:48.237681  594320 start.go:293] postStartSetup for "ha-378189" (driver="docker")
	I1014 20:25:48.237692  594320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:25:48.237753  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:25:48.237810  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:48.258809  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:48.360095  594320 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:25:48.363412  594320 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:25:48.363442  594320 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:25:48.363453  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 20:25:48.363504  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 20:25:48.363585  594320 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 20:25:48.363596  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /etc/ssl/certs/5469102.pem
	I1014 20:25:48.363699  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:25:48.371847  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:25:48.388783  594320 start.go:296] duration metric: took 151.087012ms for postStartSetup
	I1014 20:25:48.388858  594320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:25:48.388903  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:48.405740  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:48.505335  594320 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:25:48.509958  594320 fix.go:56] duration metric: took 4.935376593s for fixHost
	I1014 20:25:48.509986  594320 start.go:83] releasing machines lock for "ha-378189", held for 4.93543709s
	I1014 20:25:48.510055  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189
	I1014 20:25:48.526206  594320 ssh_runner.go:195] Run: cat /version.json
	I1014 20:25:48.526270  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:48.526531  594320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:25:48.526586  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:48.548290  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:48.549796  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:48.648126  594320 ssh_runner.go:195] Run: systemctl --version
	I1014 20:25:48.747253  594320 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:25:48.786090  594320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:25:48.790656  594320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:25:48.790738  594320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:25:48.798572  594320 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:25:48.798598  594320 start.go:495] detecting cgroup driver to use...
	I1014 20:25:48.798629  594320 detect.go:187] detected "cgroupfs" cgroup driver on host os
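detect.go reports a "cgroupfs" cgroup driver here, and that choice flows into the CRI-O cgroup_manager and kubelet cgroupDriver settings further down. One common way to see which cgroup hierarchy a host exposes, offered only as an illustrative check and not necessarily the probe detect.go performs:

    # "cgroup2fs" indicates a unified cgroup v2 hierarchy; "tmpfs" indicates a v1 layout.
    stat -fc %T /sys/fs/cgroup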
	I1014 20:25:48.798676  594320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:25:48.814126  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:25:48.827342  594320 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:25:48.827404  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:25:48.843057  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:25:48.856133  594320 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:25:48.960636  594320 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:25:49.074665  594320 docker.go:234] disabling docker service ...
	I1014 20:25:49.074816  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:25:49.089288  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:25:49.101798  594320 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:25:49.217149  594320 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:25:49.331613  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:25:49.345157  594320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:25:49.358849  594320 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:25:49.358915  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.367341  594320 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:25:49.367402  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.377099  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.390988  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.399943  594320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:25:49.407841  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.416280  594320 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.424463  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.433237  594320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:25:49.440777  594320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:25:49.447969  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:25:49.570863  594320 ssh_runner.go:195] Run: sudo systemctl restart crio
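The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) and /etc/crictl.yaml points crictl at the CRI-O socket before the restart. A small sketch for spot-checking the result on the node, using the same paths as the commands above (the surrounding TOML is not shown in this log):

    # Confirm the keys the sed edits are meant to set, the crictl endpoint, and that CRI-O came back up.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    sudo cat /etc/crictl.yaml
    systemctl is-active crio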
	I1014 20:25:49.701384  594320 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:25:49.701490  594320 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:25:49.705417  594320 start.go:563] Will wait 60s for crictl version
	I1014 20:25:49.705523  594320 ssh_runner.go:195] Run: which crictl
	I1014 20:25:49.708808  594320 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:25:49.731837  594320 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:25:49.731976  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:25:49.758582  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:25:49.794917  594320 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:25:49.797793  594320 cli_runner.go:164] Run: docker network inspect ha-378189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:25:49.814631  594320 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:25:49.818852  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
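The bash one-liner above is an idempotent "replace or append" update of /etc/hosts for host.minikube.internal: drop any existing entry, append the new one, then copy the temp file back over /etc/hosts. The same pattern is reused later for control-plane.minikube.internal. Spelled out as a standalone sketch with the values from this run, purely for readability:

    # Same replace-or-append pattern as the logged command (illustrative only).
    name=host.minikube.internal
    ip=192.168.49.1
    { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts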
	I1014 20:25:49.829148  594320 kubeadm.go:883] updating cluster {Name:ha-378189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:25:49.829306  594320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:25:49.829365  594320 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:25:49.862281  594320 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:25:49.862307  594320 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:25:49.862361  594320 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:25:49.887278  594320 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:25:49.887305  594320 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:25:49.887314  594320 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:25:49.887406  594320 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-378189 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:25:49.887491  594320 ssh_runner.go:195] Run: crio config
	I1014 20:25:49.950026  594320 cni.go:84] Creating CNI manager for ""
	I1014 20:25:49.950049  594320 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1014 20:25:49.950075  594320 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:25:49.950098  594320 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-378189 NodeName:ha-378189 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:25:49.950235  594320 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-378189"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 20:25:49.950258  594320 kube-vip.go:115] generating kube-vip config ...
	I1014 20:25:49.950319  594320 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:25:49.962120  594320 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
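kube-vip's control-plane load-balancing is skipped here because `lsmod | grep ip_vs` finds no IPVS modules; since the kic node is a container sharing the host kernel, those modules would have to be loaded on the host. A hedged sketch of how that precondition could be probed on a host that ships them (module names are the standard IPVS set, not taken from this log):

    # Try to load the usual IPVS modules, then repeat the check kube-vip setup relies on.
    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh 2>/dev/null || true
    lsmod | grep '^ip_vs' || echo "ip_vs not available"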
	I1014 20:25:49.962220  594320 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1014 20:25:49.962284  594320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:25:49.970701  594320 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:25:49.970828  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 20:25:49.977995  594320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:25:49.990750  594320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:25:50.005887  594320 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1014 20:25:50.021284  594320 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1014 20:25:50.035924  594320 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:25:50.040106  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:25:50.050560  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:25:50.176106  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
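At this point the kubelet unit, its 10-kubeadm.conf drop-in, the kubeadm manifest and the static kube-vip pod spec have all been written from memory and kubelet has been started. A minimal sketch, assuming the destination paths shown in the scp lines above, for reviewing what actually landed on the node:

    # Show the kubelet unit plus drop-in minikube just wrote, and the two generated manifests.
    sudo systemctl cat kubelet
    sudo ls -l /var/tmp/minikube/kubeadm.yaml.new /etc/kubernetes/manifests/kube-vip.yaml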
	I1014 20:25:50.192829  594320 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189 for IP: 192.168.49.2
	I1014 20:25:50.192902  594320 certs.go:195] generating shared ca certs ...
	I1014 20:25:50.192934  594320 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:50.193147  594320 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 20:25:50.193245  594320 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 20:25:50.193272  594320 certs.go:257] generating profile certs ...
	I1014 20:25:50.193429  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key
	I1014 20:25:50.193492  594320 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.097f6be1
	I1014 20:25:50.193549  594320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt.097f6be1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1014 20:25:51.264129  594320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt.097f6be1 ...
	I1014 20:25:51.264211  594320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt.097f6be1: {Name:mk888fb5869312f9302bb4d116c9798036158a50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:51.264486  594320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.097f6be1 ...
	I1014 20:25:51.264521  594320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.097f6be1: {Name:mk37f1a7286bb5552e328ef092b8b679d52c9b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:51.264668  594320 certs.go:382] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt.097f6be1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt
	I1014 20:25:51.264858  594320 certs.go:386] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.097f6be1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key
	I1014 20:25:51.265044  594320 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key
	I1014 20:25:51.265080  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:25:51.265114  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:25:51.265162  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:25:51.265197  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:25:51.265228  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:25:51.265275  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:25:51.265318  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:25:51.265346  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:25:51.265429  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 20:25:51.265484  594320 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 20:25:51.265514  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 20:25:51.265568  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:25:51.265617  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:25:51.265670  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 20:25:51.265740  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:25:51.265797  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /usr/share/ca-certificates/5469102.pem
	I1014 20:25:51.266224  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:25:51.266279  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem -> /usr/share/ca-certificates/546910.pem
	I1014 20:25:51.266869  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:25:51.293705  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:25:51.326310  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:25:51.350664  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 20:25:51.384349  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 20:25:51.428733  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:25:51.452875  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:25:51.478413  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:25:51.512863  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 20:25:51.549423  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:25:51.576302  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 20:25:51.612732  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:25:51.637495  594320 ssh_runner.go:195] Run: openssl version
	I1014 20:25:51.649332  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 20:25:51.668042  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 20:25:51.674160  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 20:25:51.674241  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 20:25:51.728425  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:25:51.739014  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:25:51.750524  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:25:51.757453  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:25:51.757551  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:25:51.825539  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:25:51.837092  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 20:25:51.848450  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 20:25:51.853823  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 20:25:51.853915  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 20:25:51.903510  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
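The symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) are named after the OpenSSL subject hash of each PEM, which is exactly what the preceding `openssl x509 -hash` calls print. The correspondence for the minikube CA, using the same paths as the log:

    # The -hash value is reused as the symlink name under /etc/ssl/certs.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    readlink -f /etc/ssl/certs/b5213941.0   # resolves to the minikubeCA.pem copied above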
	I1014 20:25:51.914011  594320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:25:51.919570  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:25:51.968390  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:25:52.020296  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:25:52.082590  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:25:52.139873  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:25:52.202457  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
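Each `-checkend 86400` call above only asks whether the certificate will still be valid 86,400 seconds (24 hours) from now; it does not report the actual dates. A small sketch, assuming the same /var/lib/minikube/certs layout, for printing the real expiry times of the certificates this restart path checks:

    # Print the notAfter date of every control-plane certificate checked above.
    for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client \
             etcd/server etcd/peer etcd/healthcheck-client; do
      echo -n "$c: "
      sudo openssl x509 -noout -enddate -in "/var/lib/minikube/certs/$c.crt"
    done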
	I1014 20:25:52.250943  594320 kubeadm.go:400] StartCluster: {Name:ha-378189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:25:52.251111  594320 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:25:52.251193  594320 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:25:52.305687  594320 cri.go:89] found id: "a140acf9afcdd0e524e513679b658bc7b2dfb9cfc9ef073ace6322510c7f8b59"
	I1014 20:25:52.305737  594320 cri.go:89] found id: "29d51a80b14a0f25f45cfca782f7ec24806bb067ccfbf71ca7754867bd5b0dab"
	I1014 20:25:52.305743  594320 cri.go:89] found id: "6eb0610f607e1c2adb8b4aa2a97c40900841b7ac3d7c573af4ffb1649fa6481b"
	I1014 20:25:52.305748  594320 cri.go:89] found id: "1df4877419cf704774f6c1342d72a03a045a372f39ae449678e06f3d08045f42"
	I1014 20:25:52.305751  594320 cri.go:89] found id: "e43d3be1d91f3e04acb8a0a4ca52d2c72f059654b12a78440dcab6115d7100aa"
	I1014 20:25:52.305756  594320 cri.go:89] found id: ""
	I1014 20:25:52.305830  594320 ssh_runner.go:195] Run: sudo runc list -f json
	W1014 20:25:52.324825  594320 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:25:52Z" level=error msg="open /run/runc: no such file or directory"
	I1014 20:25:52.324939  594320 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:25:52.340043  594320 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 20:25:52.340101  594320 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 20:25:52.340184  594320 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 20:25:52.356489  594320 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:25:52.357142  594320 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-378189" does not appear in /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 20:25:52.357314  594320 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-545047/kubeconfig needs updating (will repair): [kubeconfig missing "ha-378189" cluster setting kubeconfig missing "ha-378189" context setting]
	I1014 20:25:52.357700  594320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:52.358694  594320 kapi.go:59] client config for ha-378189: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key", CAFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:25:52.359335  594320 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 20:25:52.359405  594320 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 20:25:52.359415  594320 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 20:25:52.359420  594320 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 20:25:52.359425  594320 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 20:25:52.359381  594320 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1014 20:25:52.359974  594320 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 20:25:52.376163  594320 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1014 20:25:52.376206  594320 kubeadm.go:601] duration metric: took 36.093342ms to restartPrimaryControlPlane
	I1014 20:25:52.376215  594320 kubeadm.go:402] duration metric: took 125.292882ms to StartCluster
	I1014 20:25:52.376231  594320 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:52.376308  594320 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 20:25:52.377241  594320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:52.377523  594320 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:25:52.377556  594320 start.go:241] waiting for startup goroutines ...
	I1014 20:25:52.377564  594320 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:25:52.378287  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:52.384056  594320 out.go:179] * Enabled addons: 
	I1014 20:25:52.386972  594320 addons.go:514] duration metric: took 9.377075ms for enable addons: enabled=[]
	I1014 20:25:52.387045  594320 start.go:246] waiting for cluster config update ...
	I1014 20:25:52.387060  594320 start.go:255] writing updated cluster config ...
	I1014 20:25:52.390584  594320 out.go:203] 
	I1014 20:25:52.393809  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:52.393941  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:52.397423  594320 out.go:179] * Starting "ha-378189-m02" control-plane node in "ha-378189" cluster
	I1014 20:25:52.400301  594320 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:25:52.403231  594320 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:25:52.406009  594320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:25:52.406048  594320 cache.go:58] Caching tarball of preloaded images
	I1014 20:25:52.406156  594320 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 20:25:52.406172  594320 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:25:52.406316  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:52.406581  594320 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:25:52.433409  594320 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:25:52.433441  594320 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:25:52.433455  594320 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:25:52.433497  594320 start.go:360] acquireMachinesLock for ha-378189-m02: {Name:mk92416bb80d0c49acbc35d0990b125674d2e01f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:25:52.433567  594320 start.go:364] duration metric: took 39.312µs to acquireMachinesLock for "ha-378189-m02"
	I1014 20:25:52.433592  594320 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:25:52.433606  594320 fix.go:54] fixHost starting: m02
	I1014 20:25:52.433897  594320 cli_runner.go:164] Run: docker container inspect ha-378189-m02 --format={{.State.Status}}
	I1014 20:25:52.466174  594320 fix.go:112] recreateIfNeeded on ha-378189-m02: state=Stopped err=<nil>
	W1014 20:25:52.466206  594320 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:25:52.469507  594320 out.go:252] * Restarting existing docker container for "ha-378189-m02" ...
	I1014 20:25:52.469612  594320 cli_runner.go:164] Run: docker start ha-378189-m02
	I1014 20:25:52.947382  594320 cli_runner.go:164] Run: docker container inspect ha-378189-m02 --format={{.State.Status}}
	I1014 20:25:52.976894  594320 kic.go:430] container "ha-378189-m02" state is running.
	I1014 20:25:52.977343  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m02
	I1014 20:25:53.015718  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:53.015986  594320 machine.go:93] provisionDockerMachine start ...
	I1014 20:25:53.016064  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:53.055604  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:53.055958  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1014 20:25:53.055970  594320 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:25:53.056832  594320 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56096->127.0.0.1:33552: read: connection reset by peer
	I1014 20:25:56.232015  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189-m02
	
	I1014 20:25:56.232096  594320 ubuntu.go:182] provisioning hostname "ha-378189-m02"
	I1014 20:25:56.232187  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:56.268557  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:56.268869  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1014 20:25:56.268879  594320 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-378189-m02 && echo "ha-378189-m02" | sudo tee /etc/hostname
	I1014 20:25:56.465757  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189-m02
	
	I1014 20:25:56.465917  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:56.491913  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:56.492225  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1014 20:25:56.492241  594320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-378189-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-378189-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-378189-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:25:56.672985  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:25:56.673014  594320 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 20:25:56.673042  594320 ubuntu.go:190] setting up certificates
	I1014 20:25:56.673051  594320 provision.go:84] configureAuth start
	I1014 20:25:56.673146  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m02
	I1014 20:25:56.705878  594320 provision.go:143] copyHostCerts
	I1014 20:25:56.705922  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:25:56.705956  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 20:25:56.705963  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:25:56.706040  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 20:25:56.706163  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:25:56.706183  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 20:25:56.706187  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:25:56.706214  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 20:25:56.706263  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:25:56.706278  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 20:25:56.706282  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:25:56.706307  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 20:25:56.706362  594320 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.ha-378189-m02 san=[127.0.0.1 192.168.49.3 ha-378189-m02 localhost minikube]
	I1014 20:25:57.297271  594320 provision.go:177] copyRemoteCerts
	I1014 20:25:57.297355  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:25:57.297408  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:57.315464  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:57.428975  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:25:57.429037  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 20:25:57.452816  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:25:57.452885  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:25:57.471294  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:25:57.471361  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:25:57.489442  594320 provision.go:87] duration metric: took 816.37455ms to configureAuth
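Note: configureAuth above regenerates the docker-machine style server certificate for m02 with the SANs logged at provision.go:117 (127.0.0.1, 192.168.49.3, ha-378189-m02, localhost, minikube) and copyRemoteCerts then pushes it to /etc/docker on the node. A hedged verification sketch, run inside ha-378189-m02, using the file paths shown in the copyRemoteCerts lines above:

	# inspect the SAN list on the server certificate that was just installed
	sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
	# and the CA it was signed against
	sudo openssl x509 -noout -subject -in /etc/docker/ca.pem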
	I1014 20:25:57.489480  594320 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:25:57.489743  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:57.489867  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:57.514270  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:57.514577  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1014 20:25:57.514591  594320 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:25:57.886060  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:25:57.886080  594320 machine.go:96] duration metric: took 4.870076381s to provisionDockerMachine
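Note: the ubuntu.go step just before this writes CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' into /etc/sysconfig/crio.minikube and restarts crio, so the whole service CIDR is treated as an insecure registry source. An illustrative check (assumes the kicbase crio unit sources that sysconfig file; run inside the node):

	# the drop-in written by the provisioning step above
	cat /etc/sysconfig/crio.minikube
	# the running crio process should carry the flag after the restart
	ps -o args= -C crio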
	I1014 20:25:57.886091  594320 start.go:293] postStartSetup for "ha-378189-m02" (driver="docker")
	I1014 20:25:57.886102  594320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:25:57.886163  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:25:57.886208  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:57.904127  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:58.005798  594320 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:25:58.011149  594320 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:25:58.011183  594320 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:25:58.011195  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 20:25:58.011261  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 20:25:58.011342  594320 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 20:25:58.011354  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /etc/ssl/certs/5469102.pem
	I1014 20:25:58.011454  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:25:58.019728  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:25:58.038734  594320 start.go:296] duration metric: took 152.626218ms for postStartSetup
	I1014 20:25:58.038827  594320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:25:58.038875  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:58.056476  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:58.157712  594320 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:25:58.162651  594320 fix.go:56] duration metric: took 5.729042528s for fixHost
	I1014 20:25:58.162676  594320 start.go:83] releasing machines lock for "ha-378189-m02", held for 5.729094813s
	I1014 20:25:58.162747  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m02
	I1014 20:25:58.183647  594320 out.go:179] * Found network options:
	I1014 20:25:58.186478  594320 out.go:179]   - NO_PROXY=192.168.49.2
	W1014 20:25:58.189324  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	W1014 20:25:58.189372  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	I1014 20:25:58.189452  594320 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:25:58.189507  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:58.189778  594320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:25:58.189920  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:58.209437  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:58.218677  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:58.415461  594320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:25:58.420177  594320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:25:58.420296  594320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:25:58.428278  594320 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:25:58.428300  594320 start.go:495] detecting cgroup driver to use...
	I1014 20:25:58.428405  594320 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 20:25:58.428468  594320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:25:58.444844  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:25:58.457892  594320 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:25:58.457983  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:25:58.475762  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:25:58.489635  594320 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:25:58.621200  594320 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:25:58.796095  594320 docker.go:234] disabling docker service ...
	I1014 20:25:58.796209  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:25:58.827751  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:25:58.844957  594320 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:25:59.096618  594320 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:25:59.311841  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:25:59.331768  594320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:25:59.354924  594320 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:25:59.355038  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.367271  594320 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:25:59.367384  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.382002  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.395622  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.410728  594320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:25:59.426726  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.444708  594320 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.458277  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.469637  594320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:25:59.481336  594320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:25:59.491928  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:25:59.720255  594320 ssh_runner.go:195] Run: sudo systemctl restart crio
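Note: the sed calls above converge /etc/crio/crio.conf.d/02-crio.conf on four settings before crio is restarted: pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "cgroupfs" (matching the cgroup driver detected on the host at detect.go:187), conmon_cgroup = "pod", and a default_sysctls entry adding net.ipv4.ip_unprivileged_port_start=0. A one-line spot check of the resulting drop-in, run inside the node:

	# confirm the four settings the sed edits above are meant to leave behind
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf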
	I1014 20:25:59.958409  594320 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:25:59.958540  594320 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:25:59.962293  594320 start.go:563] Will wait 60s for crictl version
	I1014 20:25:59.962423  594320 ssh_runner.go:195] Run: which crictl
	I1014 20:25:59.966088  594320 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:25:59.991722  594320 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:25:59.991842  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:26:00.113230  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:26:00.165429  594320 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:26:00.170886  594320 out.go:179]   - env NO_PROXY=192.168.49.2
	I1014 20:26:00.174530  594320 cli_runner.go:164] Run: docker network inspect ha-378189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:26:00.218865  594320 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:26:00.228535  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:26:00.249249  594320 mustload.go:65] Loading cluster: ha-378189
	I1014 20:26:00.249528  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:26:00.249874  594320 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:26:00.286648  594320 host.go:66] Checking if "ha-378189" exists ...
	I1014 20:26:00.286974  594320 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189 for IP: 192.168.49.3
	I1014 20:26:00.287300  594320 certs.go:195] generating shared ca certs ...
	I1014 20:26:00.287327  594320 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:26:00.287479  594320 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 20:26:00.287528  594320 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 20:26:00.287537  594320 certs.go:257] generating profile certs ...
	I1014 20:26:00.287629  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key
	I1014 20:26:00.287704  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.31763e79
	I1014 20:26:00.287740  594320 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key
	I1014 20:26:00.287749  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:26:00.287763  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:26:00.287775  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:26:00.287791  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:26:00.287805  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:26:00.287818  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:26:00.287829  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:26:00.287851  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:26:00.287914  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 20:26:00.287952  594320 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 20:26:00.287961  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 20:26:00.287988  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:26:00.288014  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:26:00.288040  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 20:26:00.288093  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:26:00.288127  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem -> /usr/share/ca-certificates/546910.pem
	I1014 20:26:00.288141  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /usr/share/ca-certificates/5469102.pem
	I1014 20:26:00.288152  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:26:00.288213  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:26:00.323762  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:26:00.428739  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1014 20:26:00.433364  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1014 20:26:00.442968  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1014 20:26:00.447387  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1014 20:26:00.457017  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1014 20:26:00.461913  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1014 20:26:00.472168  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1014 20:26:00.476286  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1014 20:26:00.486124  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1014 20:26:00.490540  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1014 20:26:00.499485  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1014 20:26:00.503069  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1014 20:26:00.511944  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:26:00.532142  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:26:00.549917  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:26:00.568202  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 20:26:00.586156  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 20:26:00.603803  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:26:00.621171  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:26:00.638805  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:26:00.656530  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 20:26:00.674792  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 20:26:00.693738  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:26:00.710998  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1014 20:26:00.723268  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1014 20:26:00.736291  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1014 20:26:00.748826  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1014 20:26:00.764190  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1014 20:26:00.786939  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1014 20:26:00.807509  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1014 20:26:00.822235  594320 ssh_runner.go:195] Run: openssl version
	I1014 20:26:00.829716  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 20:26:00.839312  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 20:26:00.844086  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 20:26:00.844196  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 20:26:00.887371  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 20:26:00.895171  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 20:26:00.903441  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 20:26:00.907140  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 20:26:00.907258  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 20:26:00.947880  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:26:00.955732  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:26:00.964076  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:26:00.967857  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:26:00.967921  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:26:01.009173  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
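Note: the three link names above (51391683.0, 3ec20f2e.0, b5213941.0) are not arbitrary; each is the OpenSSL subject hash of the corresponding PEM, which is how the /etc/ssl/certs trust directory is indexed. A small sketch of the same mapping, computed by hand from the certs the scp steps above placed in /usr/share/ca-certificates:

	# reproduce the <hash>.0 symlink name minikube creates for each certificate
	for c in /usr/share/ca-certificates/546910.pem \
	         /usr/share/ca-certificates/5469102.pem \
	         /usr/share/ca-certificates/minikubeCA.pem; do
	  echo "$c -> /etc/ssl/certs/$(openssl x509 -hash -noout -in "$c").0"
	done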
	I1014 20:26:01.017414  594320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:26:01.021430  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:26:01.062823  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:26:01.103976  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:26:01.145434  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:26:01.188131  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:26:01.229876  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
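Note: each "-checkend 86400" run above asks openssl whether the certificate will expire within the next 86400 seconds (24 hours); the restart path treats a failing check as a certificate that needs regenerating rather than reusing. The same check made explicit (illustrative only):

	# exit 0 = still valid for at least a day, exit 1 = expiring or expired
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for >24h" || echo "needs regeneration"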
	I1014 20:26:01.271831  594320 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1014 20:26:01.271953  594320 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-378189-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
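Note: the kubelet unit rendered above is the per-node piece of the restart; ExecStart is rewritten so this member gets its own --hostname-override=ha-378189-m02 and --node-ip=192.168.49.3, while the rest of the configuration comes from /var/lib/kubelet/config.yaml. Once the drop-in has been scp'd and the daemon reloaded (a few lines further down), the merged unit can be inspected on the node (a verification sketch, not part of the test):

	# show kubelet.service plus the 10-kubeadm.conf drop-in written by kubeadm.go
	sudo systemctl cat kubelet
	# the per-node flags should appear in the active command line
	ps -o args= -C kubelet | tr ' ' '\n' | grep -E 'node-ip|hostname-override'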
	I1014 20:26:01.271983  594320 kube-vip.go:115] generating kube-vip config ...
	I1014 20:26:01.272034  594320 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:26:01.286472  594320 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:26:01.286553  594320 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
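Note: because the "lsmod | grep ip_vs" probe came back empty (kube-vip.go:163 above), IPVS-based control-plane load balancing is skipped and the generated manifest only keeps the ARP/leader-election path: each control-plane node runs this static pod, and whichever instance holds the plndr-cp-lock lease answers for the VIP 192.168.49.254 on eth0. A couple of hedged checks once the cluster is back up:

	# which node currently holds the kube-vip leader lease
	kubectl -n kube-system get lease plndr-cp-lock
	# on the leader, the VIP should be bound to eth0
	ip addr show eth0 | grep 192.168.49.254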
	I1014 20:26:01.286643  594320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:26:01.295904  594320 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:26:01.296015  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1014 20:26:01.304007  594320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1014 20:26:01.318122  594320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:26:01.331599  594320 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1014 20:26:01.348453  594320 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:26:01.353197  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:26:01.364628  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:26:01.509428  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:26:01.524168  594320 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:26:01.524671  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:26:01.530763  594320 out.go:179] * Verifying Kubernetes components...
	I1014 20:26:01.533842  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:26:01.682729  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:26:01.697778  594320 kapi.go:59] client config for ha-378189: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key", CAFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1014 20:26:01.697855  594320 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1014 20:26:01.698087  594320 node_ready.go:35] waiting up to 6m0s for node "ha-378189-m02" to be "Ready" ...
	W1014 20:26:11.701290  594320 node_ready.go:55] error getting node "ha-378189-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-378189-m02": net/http: TLS handshake timeout
	I1014 20:26:15.977480  594320 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-378189-m02"
	I1014 20:26:19.960495  594320 node_ready.go:49] node "ha-378189-m02" is "Ready"
	I1014 20:26:19.960524  594320 node_ready.go:38] duration metric: took 18.262413251s for node "ha-378189-m02" to be "Ready" ...
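Note: two details above are worth calling out: kubeadm.go:491 overrides the stale VIP host (192.168.49.254) with the primary endpoint 192.168.49.2, since the VIP cannot be trusted to be up while the control plane is still restarting, and the first poll at 20:26:11 fails with a TLS handshake timeout before the retry succeeds and the node reports Ready. The equivalent manual readiness probe against the restarted cluster (a sketch):

	# same condition node_ready.go is polling for
	kubectl get node ha-378189-m02 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'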
	I1014 20:26:19.960537  594320 api_server.go:52] waiting for apiserver process to appear ...
	I1014 20:26:19.960604  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:20.461531  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:20.961464  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:21.461540  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:21.961480  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:22.461610  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:22.961486  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:22.977774  594320 api_server.go:72] duration metric: took 21.453514103s to wait for apiserver process to appear ...
	I1014 20:26:22.977840  594320 api_server.go:88] waiting for apiserver healthz status ...
	I1014 20:26:22.977876  594320 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1014 20:26:22.986624  594320 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1014 20:26:22.988140  594320 api_server.go:141] control plane version: v1.34.1
	I1014 20:26:22.988198  594320 api_server.go:131] duration metric: took 10.336359ms to wait for apiserver health ...
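Note: the healthz wait simply polls the primary endpoint until it answers 200 with the literal body "ok". The same probe from a shell (a sketch; anonymous access to /healthz relies on the default system:public-info-viewer binding, otherwise client credentials have to be passed):

	curl -sk https://192.168.49.2:8443/healthz; echo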
	I1014 20:26:22.988222  594320 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 20:26:23.017914  594320 system_pods.go:59] 26 kube-system pods found
	I1014 20:26:23.018013  594320 system_pods.go:61] "coredns-66bc5c9577-fqrq6" [2464884d-50bb-485c-827e-c58495af789d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:26:23.018041  594320 system_pods.go:61] "coredns-66bc5c9577-rdfdx" [5150fb32-65b6-41fa-b82c-e867a53c8158] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:26:23.018081  594320 system_pods.go:61] "etcd-ha-378189" [cbc1d094-bd19-4f9a-bcd9-873a3cb9b55b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:26:23.018111  594320 system_pods.go:61] "etcd-ha-378189-m02" [594ab6b1-12e7-4159-9682-e9fa603c5d52] Running
	I1014 20:26:23.018134  594320 system_pods.go:61] "etcd-ha-378189-m03" [b76bfaaf-3366-477b-94da-75c707626766] Running
	I1014 20:26:23.018156  594320 system_pods.go:61] "kindnet-54mnm" [564ecf63-fb19-489c-a805-d20f44a94e83] Running
	I1014 20:26:23.018187  594320 system_pods.go:61] "kindnet-k6rbj" [f8b77dee-4c1f-4dec-88f8-5743851b6af0] Running
	I1014 20:26:23.018213  594320 system_pods.go:61] "kindnet-qjxhf" [5ccc29da-500f-4196-a88e-2cf74b77ab1c] Running
	I1014 20:26:23.018237  594320 system_pods.go:61] "kindnet-xs5v9" [fcdfe744-7240-4d2c-887b-3ace1aaf041d] Running
	I1014 20:26:23.018263  594320 system_pods.go:61] "kube-apiserver-ha-378189" [4f4fdbc4-daf1-4540-9355-eec38bcb2686] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:26:23.018296  594320 system_pods.go:61] "kube-apiserver-ha-378189-m02" [8efb89f9-62e4-434c-8cf9-90396bbb1186] Running
	I1014 20:26:23.018325  594320 system_pods.go:61] "kube-apiserver-ha-378189-m03" [241641d1-f1d5-4637-86b3-4bcc9863e83b] Running
	I1014 20:26:23.018353  594320 system_pods.go:61] "kube-controller-manager-ha-378189" [b711c7e1-975a-46b7-b60a-319cf8348e0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:26:23.018378  594320 system_pods.go:61] "kube-controller-manager-ha-378189-m02" [088feae1-1150-4c63-bba2-2d0ee8946f9a] Running
	I1014 20:26:23.018412  594320 system_pods.go:61] "kube-controller-manager-ha-378189-m03" [be5f778e-22e5-4466-a47c-5bc10dd2db5c] Running
	I1014 20:26:23.018440  594320 system_pods.go:61] "kube-proxy-2cc9g" [bf08b638-392a-4fe7-b1d6-f1a954657426] Running
	I1014 20:26:23.018463  594320 system_pods.go:61] "kube-proxy-dkq7w" [bf1cc632-a47e-4253-8cc8-5b38f7bd7add] Running
	I1014 20:26:23.018488  594320 system_pods.go:61] "kube-proxy-qhldf" [1394e466-3ef9-41b3-9fec-0f25b4e85177] Running
	I1014 20:26:23.018518  594320 system_pods.go:61] "kube-proxy-vcfj7" [c29665fb-40d6-4509-be63-bcb379888e1a] Running
	I1014 20:26:23.018546  594320 system_pods.go:61] "kube-scheduler-ha-378189" [843e7bc6-df65-4d7f-ac9a-c64da55abe53] Running
	I1014 20:26:23.018569  594320 system_pods.go:61] "kube-scheduler-ha-378189-m02" [d4c1e5ac-f2ef-4267-bf35-77bae0238d52] Running
	I1014 20:26:23.018590  594320 system_pods.go:61] "kube-scheduler-ha-378189-m03" [d8dfc00b-b2b4-42b7-90cf-7dc44e9e701c] Running
	I1014 20:26:23.018622  594320 system_pods.go:61] "kube-vip-ha-378189" [3a526d2d-ff79-4c9c-9026-adda8cfe01ed] Running
	I1014 20:26:23.018648  594320 system_pods.go:61] "kube-vip-ha-378189-m02" [4671eaeb-fa41-4d07-b180-6230480daa01] Running
	I1014 20:26:23.018672  594320 system_pods.go:61] "kube-vip-ha-378189-m03" [66eac993-126e-415c-bd2a-93d8afded195] Running
	I1014 20:26:23.018696  594320 system_pods.go:61] "storage-provisioner" [6cd440a2-0b51-4f78-8285-48bf107cf288] Running
	I1014 20:26:23.018729  594320 system_pods.go:74] duration metric: took 30.48728ms to wait for pod list to return data ...
	I1014 20:26:23.018756  594320 default_sa.go:34] waiting for default service account to be created ...
	I1014 20:26:23.026171  594320 default_sa.go:45] found service account: "default"
	I1014 20:26:23.026235  594320 default_sa.go:55] duration metric: took 7.458111ms for default service account to be created ...
	I1014 20:26:23.026258  594320 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 20:26:23.039389  594320 system_pods.go:86] 26 kube-system pods found
	I1014 20:26:23.039477  594320 system_pods.go:89] "coredns-66bc5c9577-fqrq6" [2464884d-50bb-485c-827e-c58495af789d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:26:23.039504  594320 system_pods.go:89] "coredns-66bc5c9577-rdfdx" [5150fb32-65b6-41fa-b82c-e867a53c8158] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:26:23.039543  594320 system_pods.go:89] "etcd-ha-378189" [cbc1d094-bd19-4f9a-bcd9-873a3cb9b55b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:26:23.039568  594320 system_pods.go:89] "etcd-ha-378189-m02" [594ab6b1-12e7-4159-9682-e9fa603c5d52] Running
	I1014 20:26:23.039589  594320 system_pods.go:89] "etcd-ha-378189-m03" [b76bfaaf-3366-477b-94da-75c707626766] Running
	I1014 20:26:23.039615  594320 system_pods.go:89] "kindnet-54mnm" [564ecf63-fb19-489c-a805-d20f44a94e83] Running
	I1014 20:26:23.039649  594320 system_pods.go:89] "kindnet-k6rbj" [f8b77dee-4c1f-4dec-88f8-5743851b6af0] Running
	I1014 20:26:23.039674  594320 system_pods.go:89] "kindnet-qjxhf" [5ccc29da-500f-4196-a88e-2cf74b77ab1c] Running
	I1014 20:26:23.039695  594320 system_pods.go:89] "kindnet-xs5v9" [fcdfe744-7240-4d2c-887b-3ace1aaf041d] Running
	I1014 20:26:23.039724  594320 system_pods.go:89] "kube-apiserver-ha-378189" [4f4fdbc4-daf1-4540-9355-eec38bcb2686] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:26:23.039755  594320 system_pods.go:89] "kube-apiserver-ha-378189-m02" [8efb89f9-62e4-434c-8cf9-90396bbb1186] Running
	I1014 20:26:23.039784  594320 system_pods.go:89] "kube-apiserver-ha-378189-m03" [241641d1-f1d5-4637-86b3-4bcc9863e83b] Running
	I1014 20:26:23.039823  594320 system_pods.go:89] "kube-controller-manager-ha-378189" [b711c7e1-975a-46b7-b60a-319cf8348e0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:26:23.039845  594320 system_pods.go:89] "kube-controller-manager-ha-378189-m02" [088feae1-1150-4c63-bba2-2d0ee8946f9a] Running
	I1014 20:26:23.039882  594320 system_pods.go:89] "kube-controller-manager-ha-378189-m03" [be5f778e-22e5-4466-a47c-5bc10dd2db5c] Running
	I1014 20:26:23.039900  594320 system_pods.go:89] "kube-proxy-2cc9g" [bf08b638-392a-4fe7-b1d6-f1a954657426] Running
	I1014 20:26:23.039921  594320 system_pods.go:89] "kube-proxy-dkq7w" [bf1cc632-a47e-4253-8cc8-5b38f7bd7add] Running
	I1014 20:26:23.039955  594320 system_pods.go:89] "kube-proxy-qhldf" [1394e466-3ef9-41b3-9fec-0f25b4e85177] Running
	I1014 20:26:23.039975  594320 system_pods.go:89] "kube-proxy-vcfj7" [c29665fb-40d6-4509-be63-bcb379888e1a] Running
	I1014 20:26:23.039993  594320 system_pods.go:89] "kube-scheduler-ha-378189" [843e7bc6-df65-4d7f-ac9a-c64da55abe53] Running
	I1014 20:26:23.040012  594320 system_pods.go:89] "kube-scheduler-ha-378189-m02" [d4c1e5ac-f2ef-4267-bf35-77bae0238d52] Running
	I1014 20:26:23.040031  594320 system_pods.go:89] "kube-scheduler-ha-378189-m03" [d8dfc00b-b2b4-42b7-90cf-7dc44e9e701c] Running
	I1014 20:26:23.040060  594320 system_pods.go:89] "kube-vip-ha-378189" [3a526d2d-ff79-4c9c-9026-adda8cfe01ed] Running
	I1014 20:26:23.040088  594320 system_pods.go:89] "kube-vip-ha-378189-m02" [4671eaeb-fa41-4d07-b180-6230480daa01] Running
	I1014 20:26:23.040111  594320 system_pods.go:89] "kube-vip-ha-378189-m03" [66eac993-126e-415c-bd2a-93d8afded195] Running
	I1014 20:26:23.040136  594320 system_pods.go:89] "storage-provisioner" [6cd440a2-0b51-4f78-8285-48bf107cf288] Running
	I1014 20:26:23.040170  594320 system_pods.go:126] duration metric: took 13.891803ms to wait for k8s-apps to be running ...
	I1014 20:26:23.040201  594320 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 20:26:23.040293  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:26:23.063946  594320 system_svc.go:56] duration metric: took 23.73711ms WaitForService to wait for kubelet
	I1014 20:26:23.064017  594320 kubeadm.go:586] duration metric: took 21.539761018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:26:23.064051  594320 node_conditions.go:102] verifying NodePressure condition ...
	I1014 20:26:23.082262  594320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:26:23.082345  594320 node_conditions.go:123] node cpu capacity is 2
	I1014 20:26:23.082370  594320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:26:23.082388  594320 node_conditions.go:123] node cpu capacity is 2
	I1014 20:26:23.082423  594320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:26:23.082450  594320 node_conditions.go:123] node cpu capacity is 2
	I1014 20:26:23.082472  594320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:26:23.082494  594320 node_conditions.go:123] node cpu capacity is 2
	I1014 20:26:23.082525  594320 node_conditions.go:105] duration metric: took 18.445212ms to run NodePressure ...
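Note: the capacity pairs above are one per node object returned by the API, each reporting a CPU capacity of 2 and 203034800Ki of ephemeral storage, which is what the NodePressure check compares against its minimum thresholds. The same figures can be pulled directly (illustrative):

	kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage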
	I1014 20:26:23.082553  594320 start.go:241] waiting for startup goroutines ...
	I1014 20:26:23.082589  594320 start.go:255] writing updated cluster config ...
	I1014 20:26:23.086128  594320 out.go:203] 
	I1014 20:26:23.089286  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:26:23.089471  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:26:23.092927  594320 out.go:179] * Starting "ha-378189-m03" control-plane node in "ha-378189" cluster
	I1014 20:26:23.096535  594320 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:26:23.099560  594320 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:26:23.102321  594320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:26:23.102371  594320 cache.go:58] Caching tarball of preloaded images
	I1014 20:26:23.102397  594320 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:26:23.102501  594320 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 20:26:23.102517  594320 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:26:23.102654  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:26:23.136476  594320 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:26:23.136495  594320 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:26:23.136508  594320 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:26:23.136531  594320 start.go:360] acquireMachinesLock for ha-378189-m03: {Name:mk212b55d7ba2de9e9f1edd93e6a58b4ed6f2ea7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:26:23.136583  594320 start.go:364] duration metric: took 37.113µs to acquireMachinesLock for "ha-378189-m03"
	I1014 20:26:23.136602  594320 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:26:23.136607  594320 fix.go:54] fixHost starting: m03
	I1014 20:26:23.136861  594320 cli_runner.go:164] Run: docker container inspect ha-378189-m03 --format={{.State.Status}}
	I1014 20:26:23.180150  594320 fix.go:112] recreateIfNeeded on ha-378189-m03: state=Stopped err=<nil>
	W1014 20:26:23.180176  594320 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:26:23.183408  594320 out.go:252] * Restarting existing docker container for "ha-378189-m03" ...
	I1014 20:26:23.183496  594320 cli_runner.go:164] Run: docker start ha-378189-m03
	I1014 20:26:23.583316  594320 cli_runner.go:164] Run: docker container inspect ha-378189-m03 --format={{.State.Status}}
	I1014 20:26:23.617146  594320 kic.go:430] container "ha-378189-m03" state is running.
	I1014 20:26:23.617687  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m03
	I1014 20:26:23.654101  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:26:23.654351  594320 machine.go:93] provisionDockerMachine start ...
	I1014 20:26:23.654417  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:23.679159  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:26:23.679663  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1014 20:26:23.679677  594320 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:26:23.680477  594320 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1014 20:26:26.929139  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189-m03
	
	I1014 20:26:26.929205  594320 ubuntu.go:182] provisioning hostname "ha-378189-m03"
	I1014 20:26:26.929306  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:26.953169  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:26:26.953484  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1014 20:26:26.953497  594320 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-378189-m03 && echo "ha-378189-m03" | sudo tee /etc/hostname
	I1014 20:26:27.366006  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189-m03
	
	I1014 20:26:27.366194  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:27.395992  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:26:27.396293  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1014 20:26:27.396309  594320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-378189-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-378189-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-378189-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:26:27.693852  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:26:27.693921  594320 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 20:26:27.693958  594320 ubuntu.go:190] setting up certificates
	I1014 20:26:27.694000  594320 provision.go:84] configureAuth start
	I1014 20:26:27.694087  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m03
	I1014 20:26:27.718087  594320 provision.go:143] copyHostCerts
	I1014 20:26:27.718133  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:26:27.718165  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 20:26:27.718172  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:26:27.718249  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 20:26:27.718323  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:26:27.718348  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 20:26:27.718353  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:26:27.718380  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 20:26:27.718424  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:26:27.718444  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 20:26:27.718452  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:26:27.718478  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 20:26:27.718527  594320 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.ha-378189-m03 san=[127.0.0.1 192.168.49.4 ha-378189-m03 localhost minikube]
	I1014 20:26:28.885539  594320 provision.go:177] copyRemoteCerts
	I1014 20:26:28.885609  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:26:28.885857  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:28.909377  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:29.092351  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:26:29.092437  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:26:29.150396  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:26:29.150575  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 20:26:29.212427  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:26:29.212488  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:26:29.291296  594320 provision.go:87] duration metric: took 1.597262612s to configureAuth
	I1014 20:26:29.291319  594320 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:26:29.291554  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:26:29.291658  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:29.325450  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:26:29.325769  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1014 20:26:29.325785  594320 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:26:29.937523  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:26:29.937544  594320 machine.go:96] duration metric: took 6.283184743s to provisionDockerMachine
	I1014 20:26:29.937556  594320 start.go:293] postStartSetup for "ha-378189-m03" (driver="docker")
	I1014 20:26:29.937573  594320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:26:29.937654  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:26:29.937717  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:29.965387  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:30.103245  594320 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:26:30.108041  594320 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:26:30.108068  594320 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:26:30.108080  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 20:26:30.108144  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 20:26:30.108221  594320 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 20:26:30.108228  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /etc/ssl/certs/5469102.pem
	I1014 20:26:30.108351  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:26:30.118723  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:26:30.147273  594320 start.go:296] duration metric: took 209.701827ms for postStartSetup
	I1014 20:26:30.147363  594320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:26:30.147417  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:30.167399  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:30.286490  594320 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:26:30.294616  594320 fix.go:56] duration metric: took 7.158001675s for fixHost
	I1014 20:26:30.294649  594320 start.go:83] releasing machines lock for "ha-378189-m03", held for 7.158058317s
	I1014 20:26:30.294720  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m03
	I1014 20:26:30.332338  594320 out.go:179] * Found network options:
	I1014 20:26:30.335270  594320 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1014 20:26:30.338218  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	W1014 20:26:30.338252  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	W1014 20:26:30.338276  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	W1014 20:26:30.338286  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	I1014 20:26:30.338361  594320 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:26:30.338399  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:30.338666  594320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:26:30.338727  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:30.371445  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:30.378598  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:30.685955  594320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:26:30.710071  594320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:26:30.710156  594320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:26:30.762990  594320 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:26:30.763023  594320 start.go:495] detecting cgroup driver to use...
	I1014 20:26:30.763054  594320 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 20:26:30.763113  594320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:26:30.789796  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:26:30.813445  594320 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:26:30.813530  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:26:30.839481  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:26:30.861859  594320 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:26:31.219302  594320 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:26:31.622468  594320 docker.go:234] disabling docker service ...
	I1014 20:26:31.622554  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:26:31.669472  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:26:31.699352  594320 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:26:32.097354  594320 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:26:32.482606  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:26:32.523674  594320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:26:32.577860  594320 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:26:32.577931  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.625402  594320 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:26:32.625482  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.642162  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.655785  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.690980  594320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:26:32.710947  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.738596  594320 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.764228  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.794648  594320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:26:32.819907  594320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:26:32.841183  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:26:33.263701  594320 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:28:03.725730  594320 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.461946041s)
	I1014 20:28:03.725755  594320 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:28:03.725818  594320 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:28:03.730985  594320 start.go:563] Will wait 60s for crictl version
	I1014 20:28:03.731051  594320 ssh_runner.go:195] Run: which crictl
	I1014 20:28:03.734613  594320 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:28:03.777080  594320 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:28:03.777171  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:28:03.808074  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:28:03.842942  594320 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:28:03.845779  594320 out.go:179]   - env NO_PROXY=192.168.49.2
	I1014 20:28:03.848732  594320 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1014 20:28:03.851762  594320 cli_runner.go:164] Run: docker network inspect ha-378189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:28:03.869115  594320 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:28:03.872847  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:28:03.883220  594320 mustload.go:65] Loading cluster: ha-378189
	I1014 20:28:03.883459  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:28:03.883722  594320 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:28:03.902527  594320 host.go:66] Checking if "ha-378189" exists ...
	I1014 20:28:03.902796  594320 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189 for IP: 192.168.49.4
	I1014 20:28:03.902811  594320 certs.go:195] generating shared ca certs ...
	I1014 20:28:03.902825  594320 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:28:03.902952  594320 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 20:28:03.902998  594320 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 20:28:03.903010  594320 certs.go:257] generating profile certs ...
	I1014 20:28:03.903083  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key
	I1014 20:28:03.903150  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.02897361
	I1014 20:28:03.903194  594320 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key
	I1014 20:28:03.903292  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:28:03.903318  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:28:03.903337  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:28:03.903356  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:28:03.903373  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:28:03.903388  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:28:03.903405  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:28:03.903422  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:28:03.903477  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 20:28:03.903513  594320 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 20:28:03.903525  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 20:28:03.903550  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:28:03.903577  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:28:03.903603  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 20:28:03.903647  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:28:03.903678  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:28:03.903693  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem -> /usr/share/ca-certificates/546910.pem
	I1014 20:28:03.903709  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /usr/share/ca-certificates/5469102.pem
	I1014 20:28:03.903766  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:28:03.921249  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:28:04.020701  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1014 20:28:04.026586  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1014 20:28:04.037048  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1014 20:28:04.040988  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1014 20:28:04.050562  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1014 20:28:04.054508  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1014 20:28:04.063039  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1014 20:28:04.066687  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1014 20:28:04.075046  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1014 20:28:04.079077  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1014 20:28:04.087651  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1014 20:28:04.091714  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1014 20:28:04.099957  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:28:04.120044  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:28:04.141157  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:28:04.160012  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 20:28:04.182538  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 20:28:04.206993  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:28:04.231475  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:28:04.253537  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:28:04.275116  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:28:04.294127  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 20:28:04.313010  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 20:28:04.337638  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1014 20:28:04.350377  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1014 20:28:04.365979  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1014 20:28:04.380227  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1014 20:28:04.402519  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1014 20:28:04.428480  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1014 20:28:04.444300  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1014 20:28:04.465409  594320 ssh_runner.go:195] Run: openssl version
	I1014 20:28:04.472158  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:28:04.481746  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:28:04.485919  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:28:04.486029  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:28:04.534226  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:28:04.542709  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 20:28:04.552210  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 20:28:04.557889  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 20:28:04.558010  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 20:28:04.600600  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 20:28:04.608591  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 20:28:04.617463  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 20:28:04.621083  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 20:28:04.621172  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 20:28:04.663633  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:28:04.671866  594320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:28:04.675585  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:28:04.717923  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:28:04.759364  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:28:04.801264  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:28:04.842553  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:28:04.888303  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 20:28:04.930552  594320 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1014 20:28:04.930687  594320 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-378189-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:28:04.930727  594320 kube-vip.go:115] generating kube-vip config ...
	I1014 20:28:04.930788  594320 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:28:04.948125  594320 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:28:04.948234  594320 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1014 20:28:04.948301  594320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:28:04.956444  594320 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:28:04.956519  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1014 20:28:04.964479  594320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1014 20:28:04.980043  594320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:28:04.994905  594320 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1014 20:28:05.014277  594320 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:28:05.019048  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:28:05.032269  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:28:05.165397  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:28:05.183614  594320 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:28:05.184053  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:28:05.189131  594320 out.go:179] * Verifying Kubernetes components...
	I1014 20:28:05.192051  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:28:05.332393  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:28:05.346845  594320 kapi.go:59] client config for ha-378189: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key", CAFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1014 20:28:05.346924  594320 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1014 20:28:05.347150  594320 node_ready.go:35] waiting up to 6m0s for node "ha-378189-m03" to be "Ready" ...
	W1014 20:28:07.351440  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:09.352350  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:11.851687  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:13.851750  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:16.351273  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:18.351732  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:20.354256  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:22.851172  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:24.851281  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:27.351420  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:29.851348  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:32.351357  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:34.851229  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:36.852278  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:39.352122  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:41.851851  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:44.351532  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:46.850950  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:49.352299  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:51.850817  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:53.851323  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:56.350480  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:58.351469  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:00.361289  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:02.851458  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:05.350476  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:07.350815  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:09.351365  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:11.851059  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:13.851563  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:16.351339  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:18.850245  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:20.855138  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:23.351487  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:25.852768  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:28.351284  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:30.351537  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:32.851539  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:35.351487  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:37.850912  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:39.851077  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:41.851598  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:44.351457  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:46.351534  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:48.850734  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:50.851231  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:52.851432  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:55.351286  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:57.351688  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:59.851092  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:01.852021  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:04.350799  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:06.851328  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:09.351536  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:11.353512  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:13.851897  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:15.852000  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:18.351490  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:20.851616  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:23.351164  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:25.351400  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:27.352193  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:29.850751  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:31.851102  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:34.350486  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:36.351583  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:38.851490  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:41.350742  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:43.357724  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:45.850854  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:47.851206  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:50.350789  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:52.850626  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:54.851276  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:57.350761  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:59.351538  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:01.851391  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:04.350763  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:06.850386  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:08.851237  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:11.351648  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:13.850996  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:15.851345  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:18.351324  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:20.355277  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:22.851213  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:25.350735  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:27.850574  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:29.856637  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:32.350493  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:34.351300  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:36.353689  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:38.851719  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:40.852010  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:43.351491  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:45.354360  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:47.851118  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:49.851408  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:52.351137  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:54.851402  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:57.350516  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:59.351590  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:01.851121  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:04.350628  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:06.350691  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:08.351228  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:10.851190  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:13.351117  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:15.850576  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:17.851400  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:20.351576  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:22.851057  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:25.351322  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:27.355638  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:29.851145  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:31.851487  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:34.352259  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:36.850522  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:39.351831  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:41.850768  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:44.350717  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:46.351296  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:48.850913  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:50.851023  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:53.350835  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:55.353231  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:57.851446  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:00.354501  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:02.850997  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:05.350786  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:07.350850  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:09.351182  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:11.850869  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:13.851809  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:16.351480  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:18.850797  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:20.851760  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:23.351587  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:25.850979  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:28.351624  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:30.851397  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:33.351390  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:35.351573  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:37.850906  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:40.351625  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:42.852099  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:45.352080  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:47.850966  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:49.851115  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:51.851586  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:54.350406  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:56.350741  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:58.850717  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:34:00.850850  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:34:02.851136  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	I1014 20:34:05.347342  594320 node_ready.go:38] duration metric: took 6m0.000162726s for node "ha-378189-m03" to be "Ready" ...
	I1014 20:34:05.350610  594320 out.go:203] 
	W1014 20:34:05.353525  594320 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1014 20:34:05.353550  594320 out.go:285] * 
	W1014 20:34:05.355696  594320 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:34:05.358382  594320 out.go:203] 
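
The GUEST_START exit above is minikube's readiness wait loop (node_ready.go) giving up after its 6m0s deadline while "ha-378189-m03" stays in Ready=Unknown. A minimal sketch of how the same condition could be checked by hand, assuming kubectl is pointed at this cluster and the node/profile names are the ones shown in the log above:

    # Ready condition that node_ready.go was polling (node name taken from the log above)
    kubectl get node ha-378189-m03 \
      -o jsonpath='{range .status.conditions[?(@.type=="Ready")]}{.status} {.reason} {.lastHeartbeatTime}{"\n"}{end}'

    # Log bundle requested by the error box above (profile name assumed from the node names)
    minikube logs --file=logs.txt -p ha-378189
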
	
	
	==> CRI-O <==
	Oct 14 20:26:21 ha-378189 crio[666]: time="2025-10-14T20:26:21.589653552Z" level=info msg="Started container" PID=1152 containerID=b52d43b08a0f0f3d58627b50e98ad91edb2949ba45cac9addb1a701ea81a7ab6 description=default/busybox-7b57f96db7-q27pj/busybox id=3a27ea99-be07-47d1-af55-67ce0d5512da name=/runtime.v1.RuntimeService/StartContainer sandboxID=d6cb5d57424988afbfa1dc5617cd0c429ceb241689cd63972a803390cece10e9
	Oct 14 20:26:52 ha-378189 conmon[1156]: conmon 363f25aac2f0263c8db4 <ninfo>: container 1159 exited with status 1
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.625980521Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5fe72c90-cb4c-4e12-b9ae-04fb5a121866 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.627066339Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fbaeb554-6cf4-4d97-8ba3-e4435db48ee3 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.628061243Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=cdfab445-4fce-47fd-8f74-84d39f5dc931 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.628282768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.635415286Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.635711348Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f62ccfbf6b161947a7cc74fc5dcf3603e4dda48bfb3e96ddbcd70febf8339654/merged/etc/passwd: no such file or directory"
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.63580049Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f62ccfbf6b161947a7cc74fc5dcf3603e4dda48bfb3e96ddbcd70febf8339654/merged/etc/group: no such file or directory"
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.636120757Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.66493445Z" level=info msg="Created container 80de2a13d9d5ba8f0e3d5095bcabb43ad2fd175b927888fd1a8d084399c07313: kube-system/storage-provisioner/storage-provisioner" id=cdfab445-4fce-47fd-8f74-84d39f5dc931 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.666015739Z" level=info msg="Starting container: 80de2a13d9d5ba8f0e3d5095bcabb43ad2fd175b927888fd1a8d084399c07313" id=6b17f4e7-453e-48f0-a667-21e3e6b072ec name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.669895504Z" level=info msg="Started container" PID=1401 containerID=80de2a13d9d5ba8f0e3d5095bcabb43ad2fd175b927888fd1a8d084399c07313 description=kube-system/storage-provisioner/storage-provisioner id=6b17f4e7-453e-48f0-a667-21e3e6b072ec name=/runtime.v1.RuntimeService/StartContainer sandboxID=0baae86d64cea1e60128930d082a0936798e1ae0fa7d9cc62159fa6df0e1584f
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.588845238Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.592539237Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.592598881Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.592618352Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.59592804Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.595974687Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.59599566Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.599034558Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.599072967Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.599095368Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.602072653Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.602106582Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	80de2a13d9d5b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       2                   0baae86d64cea       storage-provisioner                 kube-system
	ada8dd32f25e1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   c1d0c1569b9ee       coredns-66bc5c9577-rdfdx            kube-system
	626c2287b0728       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   7d46cbab6384c       coredns-66bc5c9577-fqrq6            kube-system
	363f25aac2f02       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       1                   0baae86d64cea       storage-provisioner                 kube-system
	1e6277796c71e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   192c9d36b9c8a       kube-proxy-vcfj7                    kube-system
	b52d43b08a0f0       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   d6cb5d5742498       busybox-7b57f96db7-q27pj            default
	19b8d6e7d9d03       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   bfe53b6aba234       kindnet-xs5v9                       kube-system
	d232b32e38df9       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   7 minutes ago       Running             kube-controller-manager   2                   a4b4e27bd77f9       kube-controller-manager-ha-378189   kube-system
	750164802f535       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   7 minutes ago       Running             kube-apiserver            2                   0c4960be5b852       kube-apiserver-ha-378189            kube-system
	a140acf9afcdd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Exited              kube-apiserver            1                   0c4960be5b852       kube-apiserver-ha-378189            kube-system
	29d51a80b14a0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   1                   a4b4e27bd77f9       kube-controller-manager-ha-378189   kube-system
	6eb0610f607e1       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   d05d495ee5973       kube-vip-ha-378189                  kube-system
	1df4877419cf7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   0e3b61d4f5605       kube-scheduler-ha-378189            kube-system
	e43d3be1d91f3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   eaed44524556d       etcd-ha-378189                      kube-system
	
	
	==> coredns [626c2287b07287457eac7f6e1b44647d8632ec7bb0a670aa408f969a93c332af] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53276 - 13110 "HINFO IN 6427901219273669792.4527693482397380301. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022504393s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ada8dd32f25e1279de4ffb5acc3114c55c10d41b5d58fa972ded2662fadec5c9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40316 - 29901 "HINFO IN 5531300076892638066.4168205942486472486. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013682922s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-378189
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-378189
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=ha-378189
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T20_20_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 20:19:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-378189
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 20:33:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 20:33:48 +0000   Tue, 14 Oct 2025 20:19:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 20:33:48 +0000   Tue, 14 Oct 2025 20:19:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 20:33:48 +0000   Tue, 14 Oct 2025 20:19:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 20:33:48 +0000   Tue, 14 Oct 2025 20:20:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-378189
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 0978d9c8ec5d4898bfbd451e1fb9d4bc
	  System UUID:                3bbab8a9-98e5-49cd-8237-de9f16fcba7a
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-q27pj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-fqrq6             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-rdfdx             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-378189                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-xs5v9                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-378189             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-378189    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-vcfj7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-378189             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-378189                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m47s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m42s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-378189 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-378189 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-378189 status is now: NodeHasSufficientMemory
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-378189 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-378189 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-378189 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                    node-controller  Node ha-378189 event: Registered Node ha-378189 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-378189 event: Registered Node ha-378189 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-378189 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-378189 event: Registered Node ha-378189 in Controller
	  Normal   RegisteredNode           8m46s                  node-controller  Node ha-378189 event: Registered Node ha-378189 in Controller
	  Normal   Starting                 8m17s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m17s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m17s (x8 over 8m17s)  kubelet          Node ha-378189 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m17s (x8 over 8m17s)  kubelet          Node ha-378189 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m17s (x8 over 8m17s)  kubelet          Node ha-378189 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m39s                  node-controller  Node ha-378189 event: Registered Node ha-378189 in Controller
	  Normal   RegisteredNode           7m33s                  node-controller  Node ha-378189 event: Registered Node ha-378189 in Controller
	
	
	Name:               ha-378189-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-378189-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=ha-378189
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_14T20_20_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 20:20:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-378189-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 20:34:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 20:33:39 +0000   Tue, 14 Oct 2025 20:25:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 20:33:39 +0000   Tue, 14 Oct 2025 20:25:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 20:33:39 +0000   Tue, 14 Oct 2025 20:25:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 20:33:39 +0000   Tue, 14 Oct 2025 20:25:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-378189-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 efd151379e5c4b2ebd4e80e64c360eed
	  System UUID:                3e1bc125-e145-47d1-8100-88ab5dc3051b
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-tkxmq                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-378189-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-qjxhf                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-378189-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-378189-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-2cc9g                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-378189-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-378189-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 7m29s                  kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-378189-m02 event: Registered Node ha-378189-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-378189-m02 event: Registered Node ha-378189-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-378189-m02 event: Registered Node ha-378189-m02 in Controller
	  Normal   NodeHasSufficientPID     9m24s (x8 over 9m24s)  kubelet          Node ha-378189-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node ha-378189-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 9m24s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m24s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node ha-378189-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeNotReady             8m50s                  node-controller  Node ha-378189-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           8m46s                  node-controller  Node ha-378189-m02 event: Registered Node ha-378189-m02 in Controller
	  Normal   Starting                 8m13s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m13s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m13s (x8 over 8m13s)  kubelet          Node ha-378189-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m13s (x8 over 8m13s)  kubelet          Node ha-378189-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m13s (x8 over 8m13s)  kubelet          Node ha-378189-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m39s                  node-controller  Node ha-378189-m02 event: Registered Node ha-378189-m02 in Controller
	  Normal   RegisteredNode           7m33s                  node-controller  Node ha-378189-m02 event: Registered Node ha-378189-m02 in Controller
	
	
	Name:               ha-378189-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-378189-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=ha-378189
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_14T20_22_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 20:22:08 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-378189-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 20:25:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 14 Oct 2025 20:25:12 +0000   Tue, 14 Oct 2025 20:27:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 14 Oct 2025 20:25:12 +0000   Tue, 14 Oct 2025 20:27:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 14 Oct 2025 20:25:12 +0000   Tue, 14 Oct 2025 20:27:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 14 Oct 2025 20:25:12 +0000   Tue, 14 Oct 2025 20:27:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-378189-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 48e6fd38e196493e9e29d82621302286
	  System UUID:                e4332518-bd63-4950-a31e-985f19fd1e46
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-n6bnr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-378189-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-k6rbj                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-378189-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-378189-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-qhldf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-378189-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-378189-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        11m    kube-proxy       
	  Normal  RegisteredNode  11m    node-controller  Node ha-378189-m03 event: Registered Node ha-378189-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-378189-m03 event: Registered Node ha-378189-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-378189-m03 event: Registered Node ha-378189-m03 in Controller
	  Normal  RegisteredNode  8m46s  node-controller  Node ha-378189-m03 event: Registered Node ha-378189-m03 in Controller
	  Normal  RegisteredNode  7m39s  node-controller  Node ha-378189-m03 event: Registered Node ha-378189-m03 in Controller
	  Normal  RegisteredNode  7m33s  node-controller  Node ha-378189-m03 event: Registered Node ha-378189-m03 in Controller
	  Normal  NodeNotReady    6m49s  node-controller  Node ha-378189-m03 status is now: NodeNotReady
	
	
	Name:               ha-378189-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-378189-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=ha-378189
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_14T20_23_23_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 20:23:23 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-378189-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 20:25:15 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 14 Oct 2025 20:24:54 +0000   Tue, 14 Oct 2025 20:27:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 14 Oct 2025 20:24:54 +0000   Tue, 14 Oct 2025 20:27:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 14 Oct 2025 20:24:54 +0000   Tue, 14 Oct 2025 20:27:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 14 Oct 2025 20:24:54 +0000   Tue, 14 Oct 2025 20:27:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-378189-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d1e8ea2da5b40c08e7704d42f98e14c
	  System UUID:                a2a231a7-55aa-4186-9e9a-7dad075b6ff1
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-54mnm       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-dkq7w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-378189-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-378189-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-378189-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node ha-378189-m04 event: Registered Node ha-378189-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-378189-m04 event: Registered Node ha-378189-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-378189-m04 event: Registered Node ha-378189-m04 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-378189-m04 status is now: NodeReady
	  Normal  RegisteredNode           8m46s              node-controller  Node ha-378189-m04 event: Registered Node ha-378189-m04 in Controller
	  Normal  RegisteredNode           7m39s              node-controller  Node ha-378189-m04 event: Registered Node ha-378189-m04 in Controller
	  Normal  RegisteredNode           7m33s              node-controller  Node ha-378189-m04 event: Registered Node ha-378189-m04 in Controller
	  Normal  NodeNotReady             6m49s              node-controller  Node ha-378189-m04 status is now: NodeNotReady
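
The describe output above shows ha-378189-m03 and ha-378189-m04 with all conditions Unknown ("Kubelet stopped posting node status") and node.kubernetes.io/unreachable taints, consistent with the Ready=Unknown retries earlier in this log. A short sketch, assuming the same kubectl context, for summarizing which nodes are affected and confirming the taints (node names taken from the output above):

    # Per-node readiness at a glance
    kubectl get nodes -o wide

    # Ready condition for every node, one line each
    kubectl get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'

    # Taints applied by the node controller to the unreachable nodes
    kubectl get node ha-378189-m03 -o jsonpath='{.spec.taints}{"\n"}'
    kubectl get node ha-378189-m04 -o jsonpath='{.spec.taints}{"\n"}'
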
	
	
	==> dmesg <==
	[ +28.158583] overlayfs: idmapped layers are currently not supported
	[Oct14 19:45] overlayfs: idmapped layers are currently not supported
	[Oct14 19:46] overlayfs: idmapped layers are currently not supported
	[Oct14 19:47] overlayfs: idmapped layers are currently not supported
	[ +31.505487] overlayfs: idmapped layers are currently not supported
	[Oct14 19:48] overlayfs: idmapped layers are currently not supported
	[ +35.381153] overlayfs: idmapped layers are currently not supported
	[ +14.062769] overlayfs: idmapped layers are currently not supported
	[Oct14 19:50] overlayfs: idmapped layers are currently not supported
	[Oct14 19:51] overlayfs: idmapped layers are currently not supported
	[Oct14 19:52] overlayfs: idmapped layers are currently not supported
	[  +0.143459] overlayfs: idmapped layers are currently not supported
	[Oct14 19:54] overlayfs: idmapped layers are currently not supported
	[Oct14 19:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct14 19:59] overlayfs: idmapped layers are currently not supported
	[Oct14 20:05] overlayfs: idmapped layers are currently not supported
	[Oct14 20:06] overlayfs: idmapped layers are currently not supported
	[Oct14 20:19] overlayfs: idmapped layers are currently not supported
	[Oct14 20:20] overlayfs: idmapped layers are currently not supported
	[Oct14 20:21] overlayfs: idmapped layers are currently not supported
	[Oct14 20:23] overlayfs: idmapped layers are currently not supported
	[Oct14 20:24] overlayfs: idmapped layers are currently not supported
	[Oct14 20:25] overlayfs: idmapped layers are currently not supported
	[  +4.162636] overlayfs: idmapped layers are currently not supported
	[Oct14 20:26] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e43d3be1d91f3e04acb8a0a4ca52d2c72f059654b12a78440dcab6115d7100aa] <==
	{"level":"warn","ts":"2025-10-14T20:27:48.401053Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"77cd6ee1d0a2b5a3","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-14T20:27:48.401127Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"77cd6ee1d0a2b5a3","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-14T20:27:51.601812Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77cd6ee1d0a2b5a3","rtt":"310.017358ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-14T20:27:51.601799Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77cd6ee1d0a2b5a3","rtt":"309.548378ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-14T20:27:52.402389Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"77cd6ee1d0a2b5a3","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-14T20:27:52.402442Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"77cd6ee1d0a2b5a3","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-14T20:27:56.404352Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"77cd6ee1d0a2b5a3","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-14T20:27:56.404408Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"77cd6ee1d0a2b5a3","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-14T20:27:56.602450Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77cd6ee1d0a2b5a3","rtt":"310.017358ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-14T20:27:56.602437Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77cd6ee1d0a2b5a3","rtt":"309.548378ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-14T20:28:00.406786Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"77cd6ee1d0a2b5a3","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-14T20:28:00.406857Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"77cd6ee1d0a2b5a3","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-14T20:28:01.603612Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77cd6ee1d0a2b5a3","rtt":"309.548378ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-14T20:28:01.603623Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77cd6ee1d0a2b5a3","rtt":"310.017358ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-14T20:28:04.408489Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"77cd6ee1d0a2b5a3","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-14T20:28:04.408548Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"77cd6ee1d0a2b5a3","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-10-14T20:28:06.171485Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"77cd6ee1d0a2b5a3","stream-type":"stream Message"}
	{"level":"info","ts":"2025-10-14T20:28:06.171624Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"info","ts":"2025-10-14T20:28:06.171665Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"info","ts":"2025-10-14T20:28:06.190047Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"77cd6ee1d0a2b5a3","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-10-14T20:28:06.190169Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"info","ts":"2025-10-14T20:28:06.222402Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"info","ts":"2025-10-14T20:28:06.223226Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"warn","ts":"2025-10-14T20:28:06.604394Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77cd6ee1d0a2b5a3","rtt":"310.017358ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-14T20:28:06.604496Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77cd6ee1d0a2b5a3","rtt":"309.548378ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	
	
	==> kernel <==
	 20:34:07 up  3:16,  0 user,  load average: 1.39, 1.45, 1.60
	Linux ha-378189 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19b8d6e7d9d0342c76a1e404b13de2161ab3c4bf2f293b51851e009a1b5f7f08] <==
	I1014 20:33:31.582793       1 main.go:324] Node ha-378189-m02 has CIDR [10.244.1.0/24] 
	I1014 20:33:41.582290       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:33:41.582360       1 main.go:301] handling current node
	I1014 20:33:41.582376       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1014 20:33:41.582382       1 main.go:324] Node ha-378189-m02 has CIDR [10.244.1.0/24] 
	I1014 20:33:41.582550       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1014 20:33:41.582564       1 main.go:324] Node ha-378189-m03 has CIDR [10.244.2.0/24] 
	I1014 20:33:41.582642       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1014 20:33:41.582652       1 main.go:324] Node ha-378189-m04 has CIDR [10.244.3.0/24] 
	I1014 20:33:51.588504       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:33:51.588651       1 main.go:301] handling current node
	I1014 20:33:51.588720       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1014 20:33:51.588753       1 main.go:324] Node ha-378189-m02 has CIDR [10.244.1.0/24] 
	I1014 20:33:51.588972       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1014 20:33:51.589013       1 main.go:324] Node ha-378189-m03 has CIDR [10.244.2.0/24] 
	I1014 20:33:51.589196       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1014 20:33:51.589242       1 main.go:324] Node ha-378189-m04 has CIDR [10.244.3.0/24] 
	I1014 20:34:01.588534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:34:01.588567       1 main.go:301] handling current node
	I1014 20:34:01.588586       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1014 20:34:01.588593       1 main.go:324] Node ha-378189-m02 has CIDR [10.244.1.0/24] 
	I1014 20:34:01.588751       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1014 20:34:01.588779       1 main.go:324] Node ha-378189-m03 has CIDR [10.244.2.0/24] 
	I1014 20:34:01.588872       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1014 20:34:01.588899       1 main.go:324] Node ha-378189-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [750164802f535723e47d9b886929b01a0d9c918009411e14769f1b1bd030b93a] <==
	I1014 20:26:20.042627       1 policy_source.go:240] refreshing policies
	I1014 20:26:20.079160       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1014 20:26:20.093566       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1014 20:26:20.093627       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1014 20:26:20.112990       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 20:26:20.113438       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 20:26:20.113624       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 20:26:20.119480       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 20:26:20.119521       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 20:26:20.119613       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1014 20:26:20.119654       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1014 20:26:20.121370       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1014 20:26:20.126606       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1014 20:26:20.130342       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 20:26:20.136886       1 cache.go:39] Caches are synced for autoregister controller
	W1014 20:26:20.207177       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1014 20:26:20.213176       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 20:26:20.235225       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1014 20:26:20.242018       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1014 20:26:20.523492       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 20:26:20.826556       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1014 20:26:22.469181       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1014 20:26:28.935001       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1014 20:26:29.018133       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1014 20:26:36.974045       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-apiserver [a140acf9afcdd0e524e513679b658bc7b2dfb9cfc9ef073ace6322510c7f8b59] <==
	I1014 20:25:51.233702       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1014 20:25:54.896784       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1014 20:25:54.896886       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1014 20:25:54.896942       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1014 20:25:54.896973       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1014 20:25:54.897001       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1014 20:25:54.897033       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1014 20:25:54.897063       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1014 20:25:54.897093       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1014 20:25:54.897121       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1014 20:25:54.897149       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1014 20:25:54.897178       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1014 20:25:54.897206       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1014 20:25:54.922112       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1014 20:25:54.935912       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	W1014 20:25:54.937765       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1014 20:25:54.962374       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1014 20:25:54.968742       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1014 20:25:54.969218       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1014 20:25:54.969513       1 instance.go:239] Using reconciler: lease
	W1014 20:25:54.971617       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1014 20:26:14.921833       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1014 20:26:14.925037       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1014 20:26:14.970811       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1014 20:26:14.970821       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [29d51a80b14a0f25f45cfca782f7ec24806bb067ccfbf71ca7754867bd5b0dab] <==
	I1014 20:25:54.406785       1 serving.go:386] Generated self-signed cert in-memory
	I1014 20:25:55.132646       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1014 20:25:55.132731       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:25:55.147018       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1014 20:25:55.147823       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 20:25:55.148017       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 20:25:55.148140       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1014 20:26:20.022395       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [d232b32e38df9f91bd238189f93a91f9a75b3d07dfb77d79c8de6932a7885216] <==
	I1014 20:26:28.618601       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 20:26:28.618752       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-378189-m02"
	I1014 20:26:28.618829       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-378189-m03"
	I1014 20:26:28.618890       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-378189-m04"
	I1014 20:26:28.618954       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-378189"
	I1014 20:26:28.604585       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 20:26:28.619106       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1014 20:26:28.602874       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1014 20:26:28.629252       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1014 20:26:28.602883       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1014 20:26:28.630235       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1014 20:26:28.602893       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1014 20:26:28.630543       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1014 20:26:28.603399       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1014 20:26:28.556415       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1014 20:26:28.637223       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1014 20:26:28.682627       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 20:26:28.682751       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1014 20:26:28.741655       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 20:26:28.753923       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 20:26:28.754530       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1014 20:26:28.754582       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 20:26:28.808506       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	E1014 20:27:02.690518       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1014 20:32:20.021441       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-n6bnr"
	
	
	==> kube-proxy [1e6277796c71ea2156b2442bc5bccf388b38d67c931d7ebfd1af22d8148d7f4a] <==
	I1014 20:26:22.368598       1 server_linux.go:53] "Using iptables proxy"
	I1014 20:26:23.150050       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 20:26:23.250265       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 20:26:23.250370       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1014 20:26:23.250482       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 20:26:24.166639       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 20:26:24.166712       1 server_linux.go:132] "Using iptables Proxier"
	I1014 20:26:24.319706       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 20:26:24.320064       1 server.go:527] "Version info" version="v1.34.1"
	I1014 20:26:24.320240       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:26:24.338325       1 config.go:200] "Starting service config controller"
	I1014 20:26:24.338693       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 20:26:24.338764       1 config.go:106] "Starting endpoint slice config controller"
	I1014 20:26:24.338796       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 20:26:24.338833       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 20:26:24.338859       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 20:26:24.341262       1 config.go:309] "Starting node config controller"
	I1014 20:26:24.342084       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 20:26:24.342169       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 20:26:24.440637       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 20:26:24.440752       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 20:26:24.440814       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1df4877419cf704774f6c1342d72a03a045a372f39ae449678e06f3d08045f42] <==
	I1014 20:25:54.629182       1 serving.go:386] Generated self-signed cert in-memory
	W1014 20:26:05.981471       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W1014 20:26:05.981507       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 20:26:05.981516       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 20:26:19.941274       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 20:26:19.947786       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:26:19.950118       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:26:19.952381       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:26:19.954023       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 20:26:19.955708       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 20:26:20.152543       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 14 20:26:20 ha-378189 kubelet[795]: E1014 20:26:20.242979     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-378189\" already exists" pod="kube-system/kube-apiserver-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.243174     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: E1014 20:26:20.266372     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-378189\" already exists" pod="kube-system/kube-controller-manager-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.266545     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: E1014 20:26:20.277022     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-378189\" already exists" pod="kube-system/kube-scheduler-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.335489     795 apiserver.go:52] "Watching apiserver"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.341320     795 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-378189" podUID="58e47d49-6119-454a-8ef7-5ff6d46dbc8e"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.377684     795 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.377713     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: E1014 20:26:20.380550     795 status_manager.go:1041] "Failed to update status for pod" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e47d49-6119-454a-8ef7-5ff6d46dbc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T20:25:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T20:25:50Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T20:25:51Z\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T20:25:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T20:25:50Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://6eb0610f607e1c2adb8b4aa2a97c40900841b7ac3d7c573af4ffb1649fa6481b\\\",\\\"image\\\":\\\"ghcr.io/kube-vip/kube-vip:v1.0.1\\\",\\\"imageID\\\":\\\"ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-vip\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T20:25:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/admin.conf\\\",\\\"name\\\":\\\"kubeconfig\\\"}]}],\\\"startTime\\\":\\\"2025-10-14T20:25:50Z\\\"}}\" for pod \"kube-system\"/\"kube-vip-ha-378189\": pods \"kube-vip-ha-378189\" not found" pod="kube-system/kube-vip-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.384848     795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a6eb6d052fb3fa4b9cbf90736fdb46e" path="/var/lib/kubelet/pods/5a6eb6d052fb3fa4b9cbf90736fdb46e/volumes"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.439465     795 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.474075     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fcdfe744-7240-4d2c-887b-3ace1aaf041d-cni-cfg\") pod \"kindnet-xs5v9\" (UID: \"fcdfe744-7240-4d2c-887b-3ace1aaf041d\") " pod="kube-system/kindnet-xs5v9"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.474378     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fcdfe744-7240-4d2c-887b-3ace1aaf041d-xtables-lock\") pod \"kindnet-xs5v9\" (UID: \"fcdfe744-7240-4d2c-887b-3ace1aaf041d\") " pod="kube-system/kindnet-xs5v9"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.474711     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcdfe744-7240-4d2c-887b-3ace1aaf041d-lib-modules\") pod \"kindnet-xs5v9\" (UID: \"fcdfe744-7240-4d2c-887b-3ace1aaf041d\") " pod="kube-system/kindnet-xs5v9"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.474808     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c29665fb-40d6-4509-be63-bcb379888e1a-lib-modules\") pod \"kube-proxy-vcfj7\" (UID: \"c29665fb-40d6-4509-be63-bcb379888e1a\") " pod="kube-system/kube-proxy-vcfj7"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.474885     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6cd440a2-0b51-4f78-8285-48bf107cf288-tmp\") pod \"storage-provisioner\" (UID: \"6cd440a2-0b51-4f78-8285-48bf107cf288\") " pod="kube-system/storage-provisioner"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.474996     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c29665fb-40d6-4509-be63-bcb379888e1a-xtables-lock\") pod \"kube-proxy-vcfj7\" (UID: \"c29665fb-40d6-4509-be63-bcb379888e1a\") " pod="kube-system/kube-proxy-vcfj7"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.489643     795 scope.go:117] "RemoveContainer" containerID="29d51a80b14a0f25f45cfca782f7ec24806bb067ccfbf71ca7754867bd5b0dab"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.579639     795 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 14 20:26:21 ha-378189 kubelet[795]: W1014 20:26:20.998774     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/crio-c1d0c1569b9ee9cc4effe5674d31a059d49f0e7345bbd9f839e16f3b2890ffd0 WatchSource:0}: Error finding container c1d0c1569b9ee9cc4effe5674d31a059d49f0e7345bbd9f839e16f3b2890ffd0: Status 404 returned error can't find the container with id c1d0c1569b9ee9cc4effe5674d31a059d49f0e7345bbd9f839e16f3b2890ffd0
	Oct 14 20:26:21 ha-378189 kubelet[795]: I1014 20:26:21.651722     795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-378189" podStartSLOduration=1.6517024089999999 podStartE2EDuration="1.651702409s" podCreationTimestamp="2025-10-14 20:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 20:26:21.43011465 +0000 UTC m=+31.232674378" watchObservedRunningTime="2025-10-14 20:26:21.651702409 +0000 UTC m=+31.454262129"
	Oct 14 20:26:50 ha-378189 kubelet[795]: E1014 20:26:50.327193     795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ed75cd94e8fcbf8b6d07b018e25b54958fa14a2404e6404aa955941052c61cc\": container with ID starting with 9ed75cd94e8fcbf8b6d07b018e25b54958fa14a2404e6404aa955941052c61cc not found: ID does not exist" containerID="9ed75cd94e8fcbf8b6d07b018e25b54958fa14a2404e6404aa955941052c61cc"
	Oct 14 20:26:50 ha-378189 kubelet[795]: I1014 20:26:50.327265     795 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="9ed75cd94e8fcbf8b6d07b018e25b54958fa14a2404e6404aa955941052c61cc" err="rpc error: code = NotFound desc = could not find container \"9ed75cd94e8fcbf8b6d07b018e25b54958fa14a2404e6404aa955941052c61cc\": container with ID starting with 9ed75cd94e8fcbf8b6d07b018e25b54958fa14a2404e6404aa955941052c61cc not found: ID does not exist"
	Oct 14 20:26:52 ha-378189 kubelet[795]: I1014 20:26:52.625158     795 scope.go:117] "RemoveContainer" containerID="363f25aac2f0263c8db462f13cef072fd6f75d27229590360981202746ac71ab"
	

                                                
                                                
-- /stdout --
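The kube-apiserver instance a140acf9... above dies with "Error creating leases: error creating storage factory: context deadline exceeded" after repeated gRPC dial failures to etcd on 127.0.0.1:2379, i.e. etcd was not reachable yet while the node restarted; the replacement instance 750164... then starts and syncs its caches. A minimal sketch, assuming crictl is present on the node (as expected with the cri-o runtime used by this job), of how etcd could be inspected directly; these commands were not run as part of the test:

	out/minikube-linux-arm64 -p ha-378189 ssh -- sudo crictl ps -a --name etcd    # list etcd containers and their states on the control-plane node
	out/minikube-linux-arm64 -p ha-378189 logs --problems                         # only log lines matching known problem patterns, if this flag is available in this minikube build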
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-378189 -n ha-378189
helpers_test.go:269: (dbg) Run:  kubectl --context ha-378189 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-q7g99
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-378189 describe pod busybox-7b57f96db7-q7g99
helpers_test.go:290: (dbg) kubectl --context ha-378189 describe pod busybox-7b57f96db7-q7g99:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-q7g99
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mll6g (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-mll6g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  108s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  108s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (532.09s)
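The describe output above explains why busybox-7b57f96db7-q7g99 never left Pending: two of the four nodes carry the untolerated node.kubernetes.io/unreachable taint, and the two remaining nodes fail the pod's anti-affinity rule, presumably because they already run a busybox replica. A minimal sketch of commands (not executed by the test) that could confirm the taints and the anti-affinity term from the same kubeconfig context:

	kubectl --context ha-378189 get nodes -o wide                                  # which nodes are Ready vs NotReady
	kubectl --context ha-378189 describe nodes | grep -iA1 taints                  # taints currently applied to each node
	kubectl --context ha-378189 get pod busybox-7b57f96db7-q7g99 -o jsonpath='{.spec.affinity.podAntiAffinity}'   # the anti-affinity term the scheduler could not satisfy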

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (8.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-378189 node delete m03 --alsologtostderr -v 5: (5.477810148s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-378189 status --alsologtostderr -v 5: exit status 7 (611.456923ms)

                                                
                                                
-- stdout --
	ha-378189
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-378189-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-378189-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:34:13.868528  600401 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:34:13.868793  600401 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:34:13.868824  600401 out.go:374] Setting ErrFile to fd 2...
	I1014 20:34:13.868844  600401 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:34:13.869261  600401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:34:13.869607  600401 out.go:368] Setting JSON to false
	I1014 20:34:13.869751  600401 mustload.go:65] Loading cluster: ha-378189
	I1014 20:34:13.869820  600401 notify.go:220] Checking for updates...
	I1014 20:34:13.870273  600401 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:34:13.870309  600401 status.go:174] checking status of ha-378189 ...
	I1014 20:34:13.871008  600401 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:34:13.893192  600401 status.go:371] ha-378189 host status = "Running" (err=<nil>)
	I1014 20:34:13.893221  600401 host.go:66] Checking if "ha-378189" exists ...
	I1014 20:34:13.893565  600401 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189
	I1014 20:34:13.919296  600401 host.go:66] Checking if "ha-378189" exists ...
	I1014 20:34:13.919739  600401 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:34:13.919799  600401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:34:13.938124  600401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:34:14.050308  600401 ssh_runner.go:195] Run: systemctl --version
	I1014 20:34:14.057673  600401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:34:14.072197  600401 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:34:14.137175  600401 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-14 20:34:14.125968857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 20:34:14.137760  600401 kubeconfig.go:125] found "ha-378189" server: "https://192.168.49.254:8443"
	I1014 20:34:14.137794  600401 api_server.go:166] Checking apiserver status ...
	I1014 20:34:14.137835  600401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:34:14.150654  600401 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1052/cgroup
	I1014 20:34:14.159367  600401 api_server.go:182] apiserver freezer: "3:freezer:/docker/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/crio/crio-750164802f535723e47d9b886929b01a0d9c918009411e14769f1b1bd030b93a"
	I1014 20:34:14.159445  600401 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/crio/crio-750164802f535723e47d9b886929b01a0d9c918009411e14769f1b1bd030b93a/freezer.state
	I1014 20:34:14.167398  600401 api_server.go:204] freezer state: "THAWED"
	I1014 20:34:14.167444  600401 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1014 20:34:14.175890  600401 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1014 20:34:14.175920  600401 status.go:463] ha-378189 apiserver status = Running (err=<nil>)
	I1014 20:34:14.175932  600401 status.go:176] ha-378189 status: &{Name:ha-378189 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 20:34:14.175949  600401 status.go:174] checking status of ha-378189-m02 ...
	I1014 20:34:14.176290  600401 cli_runner.go:164] Run: docker container inspect ha-378189-m02 --format={{.State.Status}}
	I1014 20:34:14.196184  600401 status.go:371] ha-378189-m02 host status = "Running" (err=<nil>)
	I1014 20:34:14.196207  600401 host.go:66] Checking if "ha-378189-m02" exists ...
	I1014 20:34:14.196565  600401 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m02
	I1014 20:34:14.222483  600401 host.go:66] Checking if "ha-378189-m02" exists ...
	I1014 20:34:14.222826  600401 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:34:14.222936  600401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:34:14.242489  600401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:34:14.346517  600401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:34:14.360735  600401 kubeconfig.go:125] found "ha-378189" server: "https://192.168.49.254:8443"
	I1014 20:34:14.360766  600401 api_server.go:166] Checking apiserver status ...
	I1014 20:34:14.360808  600401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:34:14.373095  600401 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup
	I1014 20:34:14.382032  600401 api_server.go:182] apiserver freezer: "3:freezer:/docker/244d80a09b7991b02e46515ea0da66ae5e1042472894182dad2e1ef56b67ec2c/crio/crio-de6dd5a4ba787f226e65828f2ba56c4e42d82c99fc6875292f25e905d733118b"
	I1014 20:34:14.382111  600401 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/244d80a09b7991b02e46515ea0da66ae5e1042472894182dad2e1ef56b67ec2c/crio/crio-de6dd5a4ba787f226e65828f2ba56c4e42d82c99fc6875292f25e905d733118b/freezer.state
	I1014 20:34:14.390985  600401 api_server.go:204] freezer state: "THAWED"
	I1014 20:34:14.391058  600401 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1014 20:34:14.400569  600401 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1014 20:34:14.400609  600401 status.go:463] ha-378189-m02 apiserver status = Running (err=<nil>)
	I1014 20:34:14.400619  600401 status.go:176] ha-378189-m02 status: &{Name:ha-378189-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 20:34:14.400646  600401 status.go:174] checking status of ha-378189-m04 ...
	I1014 20:34:14.401019  600401 cli_runner.go:164] Run: docker container inspect ha-378189-m04 --format={{.State.Status}}
	I1014 20:34:14.423965  600401 status.go:371] ha-378189-m04 host status = "Stopped" (err=<nil>)
	I1014 20:34:14.424106  600401 status.go:384] host is not running, skipping remaining checks
	I1014 20:34:14.424122  600401 status.go:176] ha-378189-m04 status: &{Name:ha-378189-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-378189 status --alsologtostderr -v 5" : exit status 7
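The non-zero exit is consistent with the status output itself: ha-378189-m04 is still reported Stopped after the restart, and ha_test.go treats any non-zero exit from the status command as a failure even though both remaining control planes are healthy. A hedged follow-up, not part of the test flow, would be to start the stopped worker with the same binary and flags the report already uses and re-check:

	out/minikube-linux-arm64 -p ha-378189 node start m04 --alsologtostderr -v 5    # bring the stopped worker back up
	out/minikube-linux-arm64 -p ha-378189 status --alsologtostderr -v 5            # expect all nodes Running once m04 rejoins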
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-378189
helpers_test.go:243: (dbg) docker inspect ha-378189:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2",
	        "Created": "2025-10-14T20:19:36.721865815Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 594446,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:25:43.632516423Z",
	            "FinishedAt": "2025-10-14T20:25:42.992747077Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/hosts",
	        "LogPath": "/var/lib/docker/containers/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2-json.log",
	        "Name": "/ha-378189",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-378189:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-378189",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2",
	                "LowerDir": "/var/lib/docker/overlay2/9ac8fb770ddeadce23f587f1c58404e77e9b514b1d0afc23aafe028ba7c4d415-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9ac8fb770ddeadce23f587f1c58404e77e9b514b1d0afc23aafe028ba7c4d415/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9ac8fb770ddeadce23f587f1c58404e77e9b514b1d0afc23aafe028ba7c4d415/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9ac8fb770ddeadce23f587f1c58404e77e9b514b1d0afc23aafe028ba7c4d415/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-378189",
	                "Source": "/var/lib/docker/volumes/ha-378189/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-378189",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-378189",
	                "name.minikube.sigs.k8s.io": "ha-378189",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05a8c6a7d89e6c1e1256598987cf30badae51e341335b4f7acba8aa6e9381f33",
	            "SandboxKey": "/var/run/docker/netns/05a8c6a7d89e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33547"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33548"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33551"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33549"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33550"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-378189": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:83:31:11:3e:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b3483fd8ccce49d2e0fd5e22713527a7e829a5fed87dbabc5f1c994fe1baa1e7",
	                    "EndpointID": "ba620ff7fedd67e5484d20b7d869783e6c44a023d61f3b6e243822cf66dbdde7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-378189",
	                        "bbe33f2b58f8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-378189 -n ha-378189
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-378189 logs -n 25: (1.488194489s)
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-378189 ssh -n ha-378189-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m02 sudo cat /home/docker/cp-test_ha-378189-m03_ha-378189-m02.txt                                         │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ cp      │ ha-378189 cp ha-378189-m03:/home/docker/cp-test.txt ha-378189-m04:/home/docker/cp-test_ha-378189-m03_ha-378189-m04.txt               │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m04 sudo cat /home/docker/cp-test_ha-378189-m03_ha-378189-m04.txt                                         │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ cp      │ ha-378189 cp testdata/cp-test.txt ha-378189-m04:/home/docker/cp-test.txt                                                             │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ cp      │ ha-378189 cp ha-378189-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3053045460/001/cp-test_ha-378189-m04.txt │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ cp      │ ha-378189 cp ha-378189-m04:/home/docker/cp-test.txt ha-378189:/home/docker/cp-test_ha-378189-m04_ha-378189.txt                       │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189 sudo cat /home/docker/cp-test_ha-378189-m04_ha-378189.txt                                                 │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ cp      │ ha-378189 cp ha-378189-m04:/home/docker/cp-test.txt ha-378189-m02:/home/docker/cp-test_ha-378189-m04_ha-378189-m02.txt               │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m02 sudo cat /home/docker/cp-test_ha-378189-m04_ha-378189-m02.txt                                         │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ cp      │ ha-378189 cp ha-378189-m04:/home/docker/cp-test.txt ha-378189-m03:/home/docker/cp-test_ha-378189-m04_ha-378189-m03.txt               │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m03 sudo cat /home/docker/cp-test_ha-378189-m04_ha-378189-m03.txt                                         │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ node    │ ha-378189 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ node    │ ha-378189 node start m02 --alsologtostderr -v 5                                                                                      │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:25 UTC │
	│ node    │ ha-378189 node list --alsologtostderr -v 5                                                                                           │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:25 UTC │                     │
	│ stop    │ ha-378189 stop --alsologtostderr -v 5                                                                                                │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:25 UTC │ 14 Oct 25 20:25 UTC │
	│ start   │ ha-378189 start --wait true --alsologtostderr -v 5                                                                                   │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:25 UTC │                     │
	│ node    │ ha-378189 node list --alsologtostderr -v 5                                                                                           │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:34 UTC │                     │
	│ node    │ ha-378189 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:34 UTC │ 14 Oct 25 20:34 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
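The cp/ssh rows in the table above exercise minikube's inter-node file copy: each `cp` pushes cp-test.txt from one node to another, and the paired `ssh -n ... sudo cat` reads the file back to confirm the copy landed. A minimal sketch of the same round-trip run by hand against this profile (profile name, node names, and paths are taken from the table; the `minikube` binary is whatever build is on your PATH):

	# Copy from worker m04 to the primary control plane, then read it back on the target node.
	minikube -p ha-378189 cp ha-378189-m04:/home/docker/cp-test.txt \
	  ha-378189:/home/docker/cp-test_ha-378189-m04_ha-378189.txt
	minikube -p ha-378189 ssh -n ha-378189 sudo cat \
	  /home/docker/cp-test_ha-378189-m04_ha-378189.txt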
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:25:43
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:25:43.361106  594320 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:25:43.361469  594320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:25:43.361502  594320 out.go:374] Setting ErrFile to fd 2...
	I1014 20:25:43.361522  594320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:25:43.361834  594320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:25:43.362250  594320 out.go:368] Setting JSON to false
	I1014 20:25:43.363143  594320 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11294,"bootTime":1760462250,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 20:25:43.363234  594320 start.go:141] virtualization:  
	I1014 20:25:43.366372  594320 out.go:179] * [ha-378189] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 20:25:43.370273  594320 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:25:43.370502  594320 notify.go:220] Checking for updates...
	I1014 20:25:43.376212  594320 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:25:43.379153  594320 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 20:25:43.382007  594320 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 20:25:43.384781  594320 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 20:25:43.387595  594320 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:25:43.391018  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:43.391127  594320 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:25:43.417968  594320 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 20:25:43.418090  594320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:25:43.479361  594320 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-14 20:25:43.470111217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 20:25:43.479472  594320 docker.go:318] overlay module found
	I1014 20:25:43.482541  594320 out.go:179] * Using the docker driver based on existing profile
	I1014 20:25:43.485265  594320 start.go:305] selected driver: docker
	I1014 20:25:43.485282  594320 start.go:925] validating driver "docker" against &{Name:ha-378189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:25:43.485422  594320 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:25:43.485520  594320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:25:43.540613  594320 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-14 20:25:43.531183177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 20:25:43.541069  594320 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:25:43.541095  594320 cni.go:84] Creating CNI manager for ""
	I1014 20:25:43.541149  594320 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1014 20:25:43.541196  594320 start.go:349] cluster config:
	{Name:ha-378189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:25:43.546335  594320 out.go:179] * Starting "ha-378189" primary control-plane node in "ha-378189" cluster
	I1014 20:25:43.549190  594320 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:25:43.552105  594320 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:25:43.554956  594320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:25:43.555008  594320 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 20:25:43.555023  594320 cache.go:58] Caching tarball of preloaded images
	I1014 20:25:43.555058  594320 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:25:43.555118  594320 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 20:25:43.555128  594320 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:25:43.555280  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:43.574403  594320 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:25:43.574427  594320 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:25:43.574448  594320 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:25:43.574472  594320 start.go:360] acquireMachinesLock for ha-378189: {Name:mk72998cded3d94d4d24b35423c9bb42df68dec0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:25:43.574536  594320 start.go:364] duration metric: took 38.794µs to acquireMachinesLock for "ha-378189"
	I1014 20:25:43.574566  594320 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:25:43.574582  594320 fix.go:54] fixHost starting: 
	I1014 20:25:43.574842  594320 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:25:43.592022  594320 fix.go:112] recreateIfNeeded on ha-378189: state=Stopped err=<nil>
	W1014 20:25:43.592054  594320 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:25:43.595275  594320 out.go:252] * Restarting existing docker container for "ha-378189" ...
	I1014 20:25:43.595377  594320 cli_runner.go:164] Run: docker start ha-378189
	I1014 20:25:43.835199  594320 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:25:43.862801  594320 kic.go:430] container "ha-378189" state is running.
	I1014 20:25:43.863545  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189
	I1014 20:25:43.891763  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:43.891990  594320 machine.go:93] provisionDockerMachine start ...
	I1014 20:25:43.892050  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:43.915656  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:43.915978  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33547 <nil> <nil>}
	I1014 20:25:43.915987  594320 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:25:43.916996  594320 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1014 20:25:47.063750  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189
	
	I1014 20:25:47.063771  594320 ubuntu.go:182] provisioning hostname "ha-378189"
	I1014 20:25:47.063837  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:47.081648  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:47.081949  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33547 <nil> <nil>}
	I1014 20:25:47.081959  594320 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-378189 && echo "ha-378189" | sudo tee /etc/hostname
	I1014 20:25:47.237261  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189
	
	I1014 20:25:47.237361  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:47.256023  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:47.256364  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33547 <nil> <nil>}
	I1014 20:25:47.256388  594320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-378189' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-378189/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-378189' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:25:47.404333  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:25:47.404362  594320 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 20:25:47.404398  594320 ubuntu.go:190] setting up certificates
	I1014 20:25:47.404407  594320 provision.go:84] configureAuth start
	I1014 20:25:47.404474  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189
	I1014 20:25:47.421674  594320 provision.go:143] copyHostCerts
	I1014 20:25:47.421717  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:25:47.421746  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 20:25:47.421765  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:25:47.421841  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 20:25:47.421929  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:25:47.421951  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 20:25:47.421956  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:25:47.421989  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 20:25:47.422033  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:25:47.422054  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 20:25:47.422061  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:25:47.422086  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 20:25:47.422140  594320 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.ha-378189 san=[127.0.0.1 192.168.49.2 ha-378189 localhost minikube]
	I1014 20:25:47.717715  594320 provision.go:177] copyRemoteCerts
	I1014 20:25:47.717791  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:25:47.717832  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:47.737857  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:47.839808  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:25:47.839867  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:25:47.856581  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:25:47.856687  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:25:47.873767  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:25:47.873865  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:25:47.890677  594320 provision.go:87] duration metric: took 486.243045ms to configureAuth
	I1014 20:25:47.890712  594320 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:25:47.890978  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:47.891083  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:47.907847  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:47.908141  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33547 <nil> <nil>}
	I1014 20:25:47.908155  594320 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:25:48.237643  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:25:48.237669  594320 machine.go:96] duration metric: took 4.345669995s to provisionDockerMachine
	I1014 20:25:48.237681  594320 start.go:293] postStartSetup for "ha-378189" (driver="docker")
	I1014 20:25:48.237692  594320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:25:48.237753  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:25:48.237810  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:48.258809  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:48.360095  594320 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:25:48.363412  594320 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:25:48.363442  594320 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:25:48.363453  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 20:25:48.363504  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 20:25:48.363585  594320 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 20:25:48.363596  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /etc/ssl/certs/5469102.pem
	I1014 20:25:48.363699  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:25:48.371847  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:25:48.388783  594320 start.go:296] duration metric: took 151.087012ms for postStartSetup
	I1014 20:25:48.388858  594320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:25:48.388903  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:48.405740  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:48.505335  594320 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:25:48.509958  594320 fix.go:56] duration metric: took 4.935376593s for fixHost
	I1014 20:25:48.509986  594320 start.go:83] releasing machines lock for "ha-378189", held for 4.93543709s
	I1014 20:25:48.510055  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189
	I1014 20:25:48.526206  594320 ssh_runner.go:195] Run: cat /version.json
	I1014 20:25:48.526270  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:48.526531  594320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:25:48.526586  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:48.548290  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:48.549796  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:48.648126  594320 ssh_runner.go:195] Run: systemctl --version
	I1014 20:25:48.747253  594320 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:25:48.786090  594320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:25:48.790656  594320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:25:48.790738  594320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:25:48.798572  594320 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:25:48.798598  594320 start.go:495] detecting cgroup driver to use...
	I1014 20:25:48.798629  594320 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 20:25:48.798676  594320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:25:48.814126  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:25:48.827342  594320 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:25:48.827404  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:25:48.843057  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:25:48.856133  594320 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:25:48.960636  594320 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:25:49.074665  594320 docker.go:234] disabling docker service ...
	I1014 20:25:49.074816  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:25:49.089288  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:25:49.101798  594320 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:25:49.217149  594320 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:25:49.331613  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:25:49.345157  594320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:25:49.358849  594320 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:25:49.358915  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.367341  594320 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:25:49.367402  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.377099  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.390988  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.399943  594320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:25:49.407841  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.416280  594320 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.424463  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.433237  594320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:25:49.440777  594320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:25:49.447969  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:25:49.570863  594320 ssh_runner.go:195] Run: sudo systemctl restart crio
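The sed/tee sequence just above rewrites /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O: pin the pause image, force the cgroupfs cgroup manager, re-add conmon_cgroup = "pod", and open unprivileged low ports via default_sysctls (the log first ensures a default_sysctls block exists, then prepends the entry). Condensed into one sketch, using the same drop-in path as the log:

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sudo systemctl daemon-reload && sudo systemctl restart crio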
	I1014 20:25:49.701384  594320 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:25:49.701490  594320 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:25:49.705417  594320 start.go:563] Will wait 60s for crictl version
	I1014 20:25:49.705523  594320 ssh_runner.go:195] Run: which crictl
	I1014 20:25:49.708808  594320 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:25:49.731837  594320 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:25:49.731976  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:25:49.758582  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:25:49.794917  594320 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:25:49.797793  594320 cli_runner.go:164] Run: docker network inspect ha-378189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:25:49.814631  594320 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:25:49.818852  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:25:49.829148  594320 kubeadm.go:883] updating cluster {Name:ha-378189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:25:49.829306  594320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:25:49.829365  594320 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:25:49.862281  594320 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:25:49.862307  594320 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:25:49.862361  594320 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:25:49.887278  594320 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:25:49.887305  594320 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:25:49.887314  594320 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:25:49.887406  594320 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-378189 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:25:49.887491  594320 ssh_runner.go:195] Run: crio config
	I1014 20:25:49.950026  594320 cni.go:84] Creating CNI manager for ""
	I1014 20:25:49.950049  594320 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1014 20:25:49.950075  594320 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:25:49.950098  594320 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-378189 NodeName:ha-378189 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:25:49.950235  594320 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-378189"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
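The kubeadm config above is what this run later copies to /var/tmp/minikube/kubeadm.yaml.new (the 2206-byte scp further down). As a generic sketch outside the minikube flow, a config like this can be sanity-checked with kubeadm's validator; this assumes a kubeadm binary sits alongside the kubelet binary shown in the unit file above, which is not confirmed by this log:

	# Validate the generated config with a kubeadm matching KubernetesVersion (assumed path).
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new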
	
	I1014 20:25:49.950258  594320 kube-vip.go:115] generating kube-vip config ...
	I1014 20:25:49.950319  594320 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:25:49.962120  594320 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:25:49.962220  594320 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
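The static pod manifest above pins the control-plane VIP 192.168.49.254 on eth0 with ARP-based leader election (IPVS load-balancing was skipped because the ip_vs modules are absent, per the lsmod check). Once the cluster is back up, a small sketch using values from this manifest can confirm the VIP from whichever control-plane node holds the plndr-cp-lock lease:

	# The VIP should be bound to eth0 on the current kube-vip leader.
	minikube -p ha-378189 ssh -n ha-378189 ip addr show eth0 | grep 192.168.49.254
	# The API server should answer on the VIP from inside the cluster network.
	minikube -p ha-378189 ssh -n ha-378189 curl -sk https://192.168.49.254:8443/version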
	I1014 20:25:49.962284  594320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:25:49.970701  594320 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:25:49.970828  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 20:25:49.977995  594320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:25:49.990750  594320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:25:50.005887  594320 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1014 20:25:50.021284  594320 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1014 20:25:50.035924  594320 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:25:50.040106  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:25:50.050560  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:25:50.176106  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:25:50.192829  594320 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189 for IP: 192.168.49.2
	I1014 20:25:50.192902  594320 certs.go:195] generating shared ca certs ...
	I1014 20:25:50.192934  594320 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:50.193147  594320 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 20:25:50.193245  594320 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 20:25:50.193272  594320 certs.go:257] generating profile certs ...
	I1014 20:25:50.193429  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key
	I1014 20:25:50.193492  594320 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.097f6be1
	I1014 20:25:50.193549  594320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt.097f6be1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1014 20:25:51.264129  594320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt.097f6be1 ...
	I1014 20:25:51.264211  594320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt.097f6be1: {Name:mk888fb5869312f9302bb4d116c9798036158a50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:51.264486  594320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.097f6be1 ...
	I1014 20:25:51.264521  594320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.097f6be1: {Name:mk37f1a7286bb5552e328ef092b8b679d52c9b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:51.264668  594320 certs.go:382] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt.097f6be1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt
	I1014 20:25:51.264858  594320 certs.go:386] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.097f6be1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key
	I1014 20:25:51.265044  594320 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key
	I1014 20:25:51.265080  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:25:51.265114  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:25:51.265162  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:25:51.265197  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:25:51.265228  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:25:51.265275  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:25:51.265318  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:25:51.265346  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:25:51.265429  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 20:25:51.265484  594320 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 20:25:51.265514  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 20:25:51.265568  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:25:51.265617  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:25:51.265670  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 20:25:51.265740  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:25:51.265797  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /usr/share/ca-certificates/5469102.pem
	I1014 20:25:51.266224  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:25:51.266279  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem -> /usr/share/ca-certificates/546910.pem
	I1014 20:25:51.266869  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:25:51.293705  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:25:51.326310  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:25:51.350664  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 20:25:51.384349  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 20:25:51.428733  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:25:51.452875  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:25:51.478413  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:25:51.512863  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 20:25:51.549423  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:25:51.576302  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 20:25:51.612732  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:25:51.637495  594320 ssh_runner.go:195] Run: openssl version
	I1014 20:25:51.649332  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 20:25:51.668042  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 20:25:51.674160  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 20:25:51.674241  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 20:25:51.728425  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:25:51.739014  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:25:51.750524  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:25:51.757453  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:25:51.757551  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:25:51.825539  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:25:51.837092  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 20:25:51.848450  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 20:25:51.853823  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 20:25:51.853915  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 20:25:51.903510  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 20:25:51.914011  594320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:25:51.919570  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:25:51.968390  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:25:52.020296  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:25:52.082590  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:25:52.139873  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:25:52.202457  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 20:25:52.250943  594320 kubeadm.go:400] StartCluster: {Name:ha-378189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:25:52.251111  594320 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:25:52.251193  594320 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:25:52.305687  594320 cri.go:89] found id: "a140acf9afcdd0e524e513679b658bc7b2dfb9cfc9ef073ace6322510c7f8b59"
	I1014 20:25:52.305737  594320 cri.go:89] found id: "29d51a80b14a0f25f45cfca782f7ec24806bb067ccfbf71ca7754867bd5b0dab"
	I1014 20:25:52.305743  594320 cri.go:89] found id: "6eb0610f607e1c2adb8b4aa2a97c40900841b7ac3d7c573af4ffb1649fa6481b"
	I1014 20:25:52.305748  594320 cri.go:89] found id: "1df4877419cf704774f6c1342d72a03a045a372f39ae449678e06f3d08045f42"
	I1014 20:25:52.305751  594320 cri.go:89] found id: "e43d3be1d91f3e04acb8a0a4ca52d2c72f059654b12a78440dcab6115d7100aa"
	I1014 20:25:52.305756  594320 cri.go:89] found id: ""
	I1014 20:25:52.305830  594320 ssh_runner.go:195] Run: sudo runc list -f json
	W1014 20:25:52.324825  594320 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:25:52Z" level=error msg="open /run/runc: no such file or directory"
	I1014 20:25:52.324939  594320 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:25:52.340043  594320 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 20:25:52.340101  594320 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 20:25:52.340184  594320 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 20:25:52.356489  594320 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:25:52.357142  594320 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-378189" does not appear in /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 20:25:52.357314  594320 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-545047/kubeconfig needs updating (will repair): [kubeconfig missing "ha-378189" cluster setting kubeconfig missing "ha-378189" context setting]
	I1014 20:25:52.357700  594320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:52.358694  594320 kapi.go:59] client config for ha-378189: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key", CAFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:25:52.359335  594320 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 20:25:52.359405  594320 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 20:25:52.359415  594320 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 20:25:52.359420  594320 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 20:25:52.359425  594320 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 20:25:52.359381  594320 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1014 20:25:52.359974  594320 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 20:25:52.376163  594320 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1014 20:25:52.376206  594320 kubeadm.go:601] duration metric: took 36.093342ms to restartPrimaryControlPlane
	I1014 20:25:52.376215  594320 kubeadm.go:402] duration metric: took 125.292882ms to StartCluster
	I1014 20:25:52.376231  594320 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:52.376308  594320 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 20:25:52.377241  594320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:52.377523  594320 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:25:52.377556  594320 start.go:241] waiting for startup goroutines ...
	I1014 20:25:52.377564  594320 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:25:52.378287  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:52.384056  594320 out.go:179] * Enabled addons: 
	I1014 20:25:52.386972  594320 addons.go:514] duration metric: took 9.377075ms for enable addons: enabled=[]
	I1014 20:25:52.387045  594320 start.go:246] waiting for cluster config update ...
	I1014 20:25:52.387060  594320 start.go:255] writing updated cluster config ...
	I1014 20:25:52.390584  594320 out.go:203] 
	I1014 20:25:52.393809  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:52.393941  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:52.397423  594320 out.go:179] * Starting "ha-378189-m02" control-plane node in "ha-378189" cluster
	I1014 20:25:52.400301  594320 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:25:52.403231  594320 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:25:52.406009  594320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:25:52.406048  594320 cache.go:58] Caching tarball of preloaded images
	I1014 20:25:52.406156  594320 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 20:25:52.406172  594320 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:25:52.406316  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:52.406581  594320 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:25:52.433409  594320 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:25:52.433441  594320 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:25:52.433455  594320 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:25:52.433497  594320 start.go:360] acquireMachinesLock for ha-378189-m02: {Name:mk92416bb80d0c49acbc35d0990b125674d2e01f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:25:52.433567  594320 start.go:364] duration metric: took 39.312µs to acquireMachinesLock for "ha-378189-m02"
	I1014 20:25:52.433592  594320 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:25:52.433606  594320 fix.go:54] fixHost starting: m02
	I1014 20:25:52.433897  594320 cli_runner.go:164] Run: docker container inspect ha-378189-m02 --format={{.State.Status}}
	I1014 20:25:52.466174  594320 fix.go:112] recreateIfNeeded on ha-378189-m02: state=Stopped err=<nil>
	W1014 20:25:52.466206  594320 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:25:52.469507  594320 out.go:252] * Restarting existing docker container for "ha-378189-m02" ...
	I1014 20:25:52.469612  594320 cli_runner.go:164] Run: docker start ha-378189-m02
	I1014 20:25:52.947382  594320 cli_runner.go:164] Run: docker container inspect ha-378189-m02 --format={{.State.Status}}
	I1014 20:25:52.976894  594320 kic.go:430] container "ha-378189-m02" state is running.
	I1014 20:25:52.977343  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m02
	I1014 20:25:53.015718  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:53.015986  594320 machine.go:93] provisionDockerMachine start ...
	I1014 20:25:53.016064  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:53.055604  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:53.055958  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1014 20:25:53.055970  594320 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:25:53.056832  594320 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56096->127.0.0.1:33552: read: connection reset by peer
	I1014 20:25:56.232015  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189-m02
	
	I1014 20:25:56.232096  594320 ubuntu.go:182] provisioning hostname "ha-378189-m02"
	I1014 20:25:56.232187  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:56.268557  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:56.268869  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1014 20:25:56.268879  594320 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-378189-m02 && echo "ha-378189-m02" | sudo tee /etc/hostname
	I1014 20:25:56.465757  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189-m02
	
	I1014 20:25:56.465917  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:56.491913  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:56.492225  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1014 20:25:56.492241  594320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-378189-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-378189-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-378189-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:25:56.672985  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:25:56.673014  594320 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 20:25:56.673042  594320 ubuntu.go:190] setting up certificates
	I1014 20:25:56.673051  594320 provision.go:84] configureAuth start
	I1014 20:25:56.673146  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m02
	I1014 20:25:56.705878  594320 provision.go:143] copyHostCerts
	I1014 20:25:56.705922  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:25:56.705956  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 20:25:56.705963  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:25:56.706040  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 20:25:56.706163  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:25:56.706183  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 20:25:56.706187  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:25:56.706214  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 20:25:56.706263  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:25:56.706278  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 20:25:56.706282  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:25:56.706307  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 20:25:56.706362  594320 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.ha-378189-m02 san=[127.0.0.1 192.168.49.3 ha-378189-m02 localhost minikube]
	I1014 20:25:57.297271  594320 provision.go:177] copyRemoteCerts
	I1014 20:25:57.297355  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:25:57.297408  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:57.315464  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:57.428975  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:25:57.429037  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 20:25:57.452816  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:25:57.452885  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:25:57.471294  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:25:57.471361  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:25:57.489442  594320 provision.go:87] duration metric: took 816.37455ms to configureAuth
	I1014 20:25:57.489480  594320 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:25:57.489743  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:57.489867  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:57.514270  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:57.514577  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1014 20:25:57.514591  594320 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:25:57.886060  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:25:57.886080  594320 machine.go:96] duration metric: took 4.870076381s to provisionDockerMachine
	I1014 20:25:57.886091  594320 start.go:293] postStartSetup for "ha-378189-m02" (driver="docker")
	I1014 20:25:57.886102  594320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:25:57.886163  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:25:57.886208  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:57.904127  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:58.005798  594320 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:25:58.011149  594320 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:25:58.011183  594320 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:25:58.011195  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 20:25:58.011261  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 20:25:58.011342  594320 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 20:25:58.011354  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /etc/ssl/certs/5469102.pem
	I1014 20:25:58.011454  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:25:58.019728  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:25:58.038734  594320 start.go:296] duration metric: took 152.626218ms for postStartSetup
	I1014 20:25:58.038827  594320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:25:58.038875  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:58.056476  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:58.157712  594320 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:25:58.162651  594320 fix.go:56] duration metric: took 5.729042528s for fixHost
	I1014 20:25:58.162676  594320 start.go:83] releasing machines lock for "ha-378189-m02", held for 5.729094813s
	I1014 20:25:58.162747  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m02
	I1014 20:25:58.183647  594320 out.go:179] * Found network options:
	I1014 20:25:58.186478  594320 out.go:179]   - NO_PROXY=192.168.49.2
	W1014 20:25:58.189324  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	W1014 20:25:58.189372  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	I1014 20:25:58.189452  594320 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:25:58.189507  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:58.189778  594320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:25:58.189920  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:58.209437  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:58.218677  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:58.415461  594320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:25:58.420177  594320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:25:58.420296  594320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:25:58.428278  594320 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:25:58.428300  594320 start.go:495] detecting cgroup driver to use...
	I1014 20:25:58.428405  594320 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 20:25:58.428468  594320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:25:58.444844  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:25:58.457892  594320 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:25:58.457983  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:25:58.475762  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:25:58.489635  594320 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:25:58.621200  594320 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:25:58.796095  594320 docker.go:234] disabling docker service ...
	I1014 20:25:58.796209  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:25:58.827751  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:25:58.844957  594320 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:25:59.096618  594320 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:25:59.311841  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:25:59.331768  594320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:25:59.354924  594320 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:25:59.355038  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.367271  594320 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:25:59.367384  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.382002  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.395622  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.410728  594320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:25:59.426726  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.444708  594320 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.458277  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.469637  594320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:25:59.481336  594320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:25:59.491928  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:25:59.720255  594320 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:25:59.958409  594320 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:25:59.958540  594320 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:25:59.962293  594320 start.go:563] Will wait 60s for crictl version
	I1014 20:25:59.962423  594320 ssh_runner.go:195] Run: which crictl
	I1014 20:25:59.966088  594320 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:25:59.991722  594320 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:25:59.991842  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:26:00.113230  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:26:00.165429  594320 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:26:00.170886  594320 out.go:179]   - env NO_PROXY=192.168.49.2
	I1014 20:26:00.174530  594320 cli_runner.go:164] Run: docker network inspect ha-378189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:26:00.218865  594320 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:26:00.228535  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:26:00.249249  594320 mustload.go:65] Loading cluster: ha-378189
	I1014 20:26:00.249528  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:26:00.249874  594320 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:26:00.286648  594320 host.go:66] Checking if "ha-378189" exists ...
	I1014 20:26:00.286974  594320 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189 for IP: 192.168.49.3
	I1014 20:26:00.287300  594320 certs.go:195] generating shared ca certs ...
	I1014 20:26:00.287327  594320 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:26:00.287479  594320 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 20:26:00.287528  594320 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 20:26:00.287537  594320 certs.go:257] generating profile certs ...
	I1014 20:26:00.287629  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key
	I1014 20:26:00.287704  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.31763e79
	I1014 20:26:00.287740  594320 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key
	I1014 20:26:00.287749  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:26:00.287763  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:26:00.287775  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:26:00.287791  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:26:00.287805  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:26:00.287818  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:26:00.287829  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:26:00.287851  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:26:00.287914  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 20:26:00.287952  594320 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 20:26:00.287961  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 20:26:00.287988  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:26:00.288014  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:26:00.288040  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 20:26:00.288093  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:26:00.288127  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem -> /usr/share/ca-certificates/546910.pem
	I1014 20:26:00.288141  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /usr/share/ca-certificates/5469102.pem
	I1014 20:26:00.288152  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:26:00.288213  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:26:00.323762  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:26:00.428739  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1014 20:26:00.433364  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1014 20:26:00.442968  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1014 20:26:00.447387  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1014 20:26:00.457017  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1014 20:26:00.461913  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1014 20:26:00.472168  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1014 20:26:00.476286  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1014 20:26:00.486124  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1014 20:26:00.490540  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1014 20:26:00.499485  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1014 20:26:00.503069  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1014 20:26:00.511944  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:26:00.532142  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:26:00.549917  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:26:00.568202  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 20:26:00.586156  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 20:26:00.603803  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:26:00.621171  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:26:00.638805  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:26:00.656530  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 20:26:00.674792  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 20:26:00.693738  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:26:00.710998  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1014 20:26:00.723268  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1014 20:26:00.736291  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1014 20:26:00.748826  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1014 20:26:00.764190  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1014 20:26:00.786939  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1014 20:26:00.807509  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1014 20:26:00.822235  594320 ssh_runner.go:195] Run: openssl version
	I1014 20:26:00.829716  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 20:26:00.839312  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 20:26:00.844086  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 20:26:00.844196  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 20:26:00.887371  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 20:26:00.895171  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 20:26:00.903441  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 20:26:00.907140  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 20:26:00.907258  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 20:26:00.947880  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:26:00.955732  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:26:00.964076  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:26:00.967857  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:26:00.967921  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:26:01.009173  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:26:01.017414  594320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:26:01.021430  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:26:01.062823  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:26:01.103976  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:26:01.145434  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:26:01.188131  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:26:01.229876  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 20:26:01.271831  594320 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1014 20:26:01.271953  594320 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-378189-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:26:01.271983  594320 kube-vip.go:115] generating kube-vip config ...
	I1014 20:26:01.272034  594320 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:26:01.286472  594320 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:26:01.286553  594320 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1014 20:26:01.286643  594320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:26:01.295904  594320 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:26:01.296015  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1014 20:26:01.304007  594320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1014 20:26:01.318122  594320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:26:01.331599  594320 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1014 20:26:01.348453  594320 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:26:01.353197  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:26:01.364628  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:26:01.509428  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:26:01.524168  594320 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:26:01.524671  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:26:01.530763  594320 out.go:179] * Verifying Kubernetes components...
	I1014 20:26:01.533842  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:26:01.682729  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:26:01.697778  594320 kapi.go:59] client config for ha-378189: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key", CAFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1014 20:26:01.697855  594320 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1014 20:26:01.698087  594320 node_ready.go:35] waiting up to 6m0s for node "ha-378189-m02" to be "Ready" ...
	W1014 20:26:11.701290  594320 node_ready.go:55] error getting node "ha-378189-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-378189-m02": net/http: TLS handshake timeout
	I1014 20:26:15.977480  594320 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-378189-m02"
	I1014 20:26:19.960495  594320 node_ready.go:49] node "ha-378189-m02" is "Ready"
	I1014 20:26:19.960524  594320 node_ready.go:38] duration metric: took 18.262413251s for node "ha-378189-m02" to be "Ready" ...
	I1014 20:26:19.960537  594320 api_server.go:52] waiting for apiserver process to appear ...
	I1014 20:26:19.960604  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:20.461531  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:20.961464  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:21.461540  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:21.961480  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:22.461610  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:22.961486  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:22.977774  594320 api_server.go:72] duration metric: took 21.453514103s to wait for apiserver process to appear ...
	I1014 20:26:22.977840  594320 api_server.go:88] waiting for apiserver healthz status ...
	I1014 20:26:22.977876  594320 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1014 20:26:22.986624  594320 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1014 20:26:22.988140  594320 api_server.go:141] control plane version: v1.34.1
	I1014 20:26:22.988198  594320 api_server.go:131] duration metric: took 10.336359ms to wait for apiserver health ...
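
Note: the healthz wait above first confirms a kube-apiserver process exists via pgrep, then polls GET /healthz until it returns 200 with body "ok". A minimal sketch of the HTTP side using only the standard library; skipping certificate verification is a simplification for illustration (the real client trusts the cluster CA).

// healthz_sketch.go: one probe of the apiserver /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body)) // healthy apiserver returns 200 / "ok"
}
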
	I1014 20:26:22.988222  594320 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 20:26:23.017914  594320 system_pods.go:59] 26 kube-system pods found
	I1014 20:26:23.018013  594320 system_pods.go:61] "coredns-66bc5c9577-fqrq6" [2464884d-50bb-485c-827e-c58495af789d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:26:23.018041  594320 system_pods.go:61] "coredns-66bc5c9577-rdfdx" [5150fb32-65b6-41fa-b82c-e867a53c8158] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:26:23.018081  594320 system_pods.go:61] "etcd-ha-378189" [cbc1d094-bd19-4f9a-bcd9-873a3cb9b55b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:26:23.018111  594320 system_pods.go:61] "etcd-ha-378189-m02" [594ab6b1-12e7-4159-9682-e9fa603c5d52] Running
	I1014 20:26:23.018134  594320 system_pods.go:61] "etcd-ha-378189-m03" [b76bfaaf-3366-477b-94da-75c707626766] Running
	I1014 20:26:23.018156  594320 system_pods.go:61] "kindnet-54mnm" [564ecf63-fb19-489c-a805-d20f44a94e83] Running
	I1014 20:26:23.018187  594320 system_pods.go:61] "kindnet-k6rbj" [f8b77dee-4c1f-4dec-88f8-5743851b6af0] Running
	I1014 20:26:23.018213  594320 system_pods.go:61] "kindnet-qjxhf" [5ccc29da-500f-4196-a88e-2cf74b77ab1c] Running
	I1014 20:26:23.018237  594320 system_pods.go:61] "kindnet-xs5v9" [fcdfe744-7240-4d2c-887b-3ace1aaf041d] Running
	I1014 20:26:23.018263  594320 system_pods.go:61] "kube-apiserver-ha-378189" [4f4fdbc4-daf1-4540-9355-eec38bcb2686] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:26:23.018296  594320 system_pods.go:61] "kube-apiserver-ha-378189-m02" [8efb89f9-62e4-434c-8cf9-90396bbb1186] Running
	I1014 20:26:23.018325  594320 system_pods.go:61] "kube-apiserver-ha-378189-m03" [241641d1-f1d5-4637-86b3-4bcc9863e83b] Running
	I1014 20:26:23.018353  594320 system_pods.go:61] "kube-controller-manager-ha-378189" [b711c7e1-975a-46b7-b60a-319cf8348e0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:26:23.018378  594320 system_pods.go:61] "kube-controller-manager-ha-378189-m02" [088feae1-1150-4c63-bba2-2d0ee8946f9a] Running
	I1014 20:26:23.018412  594320 system_pods.go:61] "kube-controller-manager-ha-378189-m03" [be5f778e-22e5-4466-a47c-5bc10dd2db5c] Running
	I1014 20:26:23.018440  594320 system_pods.go:61] "kube-proxy-2cc9g" [bf08b638-392a-4fe7-b1d6-f1a954657426] Running
	I1014 20:26:23.018463  594320 system_pods.go:61] "kube-proxy-dkq7w" [bf1cc632-a47e-4253-8cc8-5b38f7bd7add] Running
	I1014 20:26:23.018488  594320 system_pods.go:61] "kube-proxy-qhldf" [1394e466-3ef9-41b3-9fec-0f25b4e85177] Running
	I1014 20:26:23.018518  594320 system_pods.go:61] "kube-proxy-vcfj7" [c29665fb-40d6-4509-be63-bcb379888e1a] Running
	I1014 20:26:23.018546  594320 system_pods.go:61] "kube-scheduler-ha-378189" [843e7bc6-df65-4d7f-ac9a-c64da55abe53] Running
	I1014 20:26:23.018569  594320 system_pods.go:61] "kube-scheduler-ha-378189-m02" [d4c1e5ac-f2ef-4267-bf35-77bae0238d52] Running
	I1014 20:26:23.018590  594320 system_pods.go:61] "kube-scheduler-ha-378189-m03" [d8dfc00b-b2b4-42b7-90cf-7dc44e9e701c] Running
	I1014 20:26:23.018622  594320 system_pods.go:61] "kube-vip-ha-378189" [3a526d2d-ff79-4c9c-9026-adda8cfe01ed] Running
	I1014 20:26:23.018648  594320 system_pods.go:61] "kube-vip-ha-378189-m02" [4671eaeb-fa41-4d07-b180-6230480daa01] Running
	I1014 20:26:23.018672  594320 system_pods.go:61] "kube-vip-ha-378189-m03" [66eac993-126e-415c-bd2a-93d8afded195] Running
	I1014 20:26:23.018696  594320 system_pods.go:61] "storage-provisioner" [6cd440a2-0b51-4f78-8285-48bf107cf288] Running
	I1014 20:26:23.018729  594320 system_pods.go:74] duration metric: took 30.48728ms to wait for pod list to return data ...
	I1014 20:26:23.018756  594320 default_sa.go:34] waiting for default service account to be created ...
	I1014 20:26:23.026171  594320 default_sa.go:45] found service account: "default"
	I1014 20:26:23.026235  594320 default_sa.go:55] duration metric: took 7.458111ms for default service account to be created ...
	I1014 20:26:23.026258  594320 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 20:26:23.039389  594320 system_pods.go:86] 26 kube-system pods found
	I1014 20:26:23.039477  594320 system_pods.go:89] "coredns-66bc5c9577-fqrq6" [2464884d-50bb-485c-827e-c58495af789d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:26:23.039504  594320 system_pods.go:89] "coredns-66bc5c9577-rdfdx" [5150fb32-65b6-41fa-b82c-e867a53c8158] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:26:23.039543  594320 system_pods.go:89] "etcd-ha-378189" [cbc1d094-bd19-4f9a-bcd9-873a3cb9b55b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:26:23.039568  594320 system_pods.go:89] "etcd-ha-378189-m02" [594ab6b1-12e7-4159-9682-e9fa603c5d52] Running
	I1014 20:26:23.039589  594320 system_pods.go:89] "etcd-ha-378189-m03" [b76bfaaf-3366-477b-94da-75c707626766] Running
	I1014 20:26:23.039615  594320 system_pods.go:89] "kindnet-54mnm" [564ecf63-fb19-489c-a805-d20f44a94e83] Running
	I1014 20:26:23.039649  594320 system_pods.go:89] "kindnet-k6rbj" [f8b77dee-4c1f-4dec-88f8-5743851b6af0] Running
	I1014 20:26:23.039674  594320 system_pods.go:89] "kindnet-qjxhf" [5ccc29da-500f-4196-a88e-2cf74b77ab1c] Running
	I1014 20:26:23.039695  594320 system_pods.go:89] "kindnet-xs5v9" [fcdfe744-7240-4d2c-887b-3ace1aaf041d] Running
	I1014 20:26:23.039724  594320 system_pods.go:89] "kube-apiserver-ha-378189" [4f4fdbc4-daf1-4540-9355-eec38bcb2686] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:26:23.039755  594320 system_pods.go:89] "kube-apiserver-ha-378189-m02" [8efb89f9-62e4-434c-8cf9-90396bbb1186] Running
	I1014 20:26:23.039784  594320 system_pods.go:89] "kube-apiserver-ha-378189-m03" [241641d1-f1d5-4637-86b3-4bcc9863e83b] Running
	I1014 20:26:23.039823  594320 system_pods.go:89] "kube-controller-manager-ha-378189" [b711c7e1-975a-46b7-b60a-319cf8348e0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:26:23.039845  594320 system_pods.go:89] "kube-controller-manager-ha-378189-m02" [088feae1-1150-4c63-bba2-2d0ee8946f9a] Running
	I1014 20:26:23.039882  594320 system_pods.go:89] "kube-controller-manager-ha-378189-m03" [be5f778e-22e5-4466-a47c-5bc10dd2db5c] Running
	I1014 20:26:23.039900  594320 system_pods.go:89] "kube-proxy-2cc9g" [bf08b638-392a-4fe7-b1d6-f1a954657426] Running
	I1014 20:26:23.039921  594320 system_pods.go:89] "kube-proxy-dkq7w" [bf1cc632-a47e-4253-8cc8-5b38f7bd7add] Running
	I1014 20:26:23.039955  594320 system_pods.go:89] "kube-proxy-qhldf" [1394e466-3ef9-41b3-9fec-0f25b4e85177] Running
	I1014 20:26:23.039975  594320 system_pods.go:89] "kube-proxy-vcfj7" [c29665fb-40d6-4509-be63-bcb379888e1a] Running
	I1014 20:26:23.039993  594320 system_pods.go:89] "kube-scheduler-ha-378189" [843e7bc6-df65-4d7f-ac9a-c64da55abe53] Running
	I1014 20:26:23.040012  594320 system_pods.go:89] "kube-scheduler-ha-378189-m02" [d4c1e5ac-f2ef-4267-bf35-77bae0238d52] Running
	I1014 20:26:23.040031  594320 system_pods.go:89] "kube-scheduler-ha-378189-m03" [d8dfc00b-b2b4-42b7-90cf-7dc44e9e701c] Running
	I1014 20:26:23.040060  594320 system_pods.go:89] "kube-vip-ha-378189" [3a526d2d-ff79-4c9c-9026-adda8cfe01ed] Running
	I1014 20:26:23.040088  594320 system_pods.go:89] "kube-vip-ha-378189-m02" [4671eaeb-fa41-4d07-b180-6230480daa01] Running
	I1014 20:26:23.040111  594320 system_pods.go:89] "kube-vip-ha-378189-m03" [66eac993-126e-415c-bd2a-93d8afded195] Running
	I1014 20:26:23.040136  594320 system_pods.go:89] "storage-provisioner" [6cd440a2-0b51-4f78-8285-48bf107cf288] Running
	I1014 20:26:23.040170  594320 system_pods.go:126] duration metric: took 13.891803ms to wait for k8s-apps to be running ...
	I1014 20:26:23.040201  594320 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 20:26:23.040293  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:26:23.063946  594320 system_svc.go:56] duration metric: took 23.73711ms WaitForService to wait for kubelet
	I1014 20:26:23.064017  594320 kubeadm.go:586] duration metric: took 21.539761018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:26:23.064051  594320 node_conditions.go:102] verifying NodePressure condition ...
	I1014 20:26:23.082262  594320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:26:23.082345  594320 node_conditions.go:123] node cpu capacity is 2
	I1014 20:26:23.082370  594320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:26:23.082388  594320 node_conditions.go:123] node cpu capacity is 2
	I1014 20:26:23.082423  594320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:26:23.082450  594320 node_conditions.go:123] node cpu capacity is 2
	I1014 20:26:23.082472  594320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:26:23.082494  594320 node_conditions.go:123] node cpu capacity is 2
	I1014 20:26:23.082525  594320 node_conditions.go:105] duration metric: took 18.445212ms to run NodePressure ...
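
Note: the NodePressure step reports each node's CPU and ephemeral-storage capacity, which come from the node objects' status.capacity. A minimal sketch of reading those values, assuming client-go; the kubeconfig path is illustrative.

// nodecaps_sketch.go: print CPU and ephemeral-storage capacity for every node.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity.Cpu()               // e.g. "2" in the log above
		eph := n.Status.Capacity.StorageEphemeral()  // e.g. "203034800Ki"
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
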
	I1014 20:26:23.082553  594320 start.go:241] waiting for startup goroutines ...
	I1014 20:26:23.082589  594320 start.go:255] writing updated cluster config ...
	I1014 20:26:23.086128  594320 out.go:203] 
	I1014 20:26:23.089286  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:26:23.089471  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:26:23.092927  594320 out.go:179] * Starting "ha-378189-m03" control-plane node in "ha-378189" cluster
	I1014 20:26:23.096535  594320 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:26:23.099560  594320 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:26:23.102321  594320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:26:23.102371  594320 cache.go:58] Caching tarball of preloaded images
	I1014 20:26:23.102397  594320 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:26:23.102501  594320 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 20:26:23.102517  594320 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:26:23.102654  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:26:23.136476  594320 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:26:23.136495  594320 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:26:23.136508  594320 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:26:23.136531  594320 start.go:360] acquireMachinesLock for ha-378189-m03: {Name:mk212b55d7ba2de9e9f1edd93e6a58b4ed6f2ea7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:26:23.136583  594320 start.go:364] duration metric: took 37.113µs to acquireMachinesLock for "ha-378189-m03"
	I1014 20:26:23.136602  594320 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:26:23.136607  594320 fix.go:54] fixHost starting: m03
	I1014 20:26:23.136861  594320 cli_runner.go:164] Run: docker container inspect ha-378189-m03 --format={{.State.Status}}
	I1014 20:26:23.180150  594320 fix.go:112] recreateIfNeeded on ha-378189-m03: state=Stopped err=<nil>
	W1014 20:26:23.180176  594320 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:26:23.183408  594320 out.go:252] * Restarting existing docker container for "ha-378189-m03" ...
	I1014 20:26:23.183496  594320 cli_runner.go:164] Run: docker start ha-378189-m03
	I1014 20:26:23.583316  594320 cli_runner.go:164] Run: docker container inspect ha-378189-m03 --format={{.State.Status}}
	I1014 20:26:23.617146  594320 kic.go:430] container "ha-378189-m03" state is running.
	I1014 20:26:23.617687  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m03
	I1014 20:26:23.654101  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:26:23.654351  594320 machine.go:93] provisionDockerMachine start ...
	I1014 20:26:23.654417  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:23.679159  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:26:23.679663  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1014 20:26:23.679677  594320 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:26:23.680477  594320 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1014 20:26:26.929139  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189-m03
	
	I1014 20:26:26.929205  594320 ubuntu.go:182] provisioning hostname "ha-378189-m03"
	I1014 20:26:26.929306  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:26.953169  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:26:26.953484  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1014 20:26:26.953497  594320 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-378189-m03 && echo "ha-378189-m03" | sudo tee /etc/hostname
	I1014 20:26:27.366006  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189-m03
	
	I1014 20:26:27.366194  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:27.395992  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:26:27.396293  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1014 20:26:27.396309  594320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-378189-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-378189-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-378189-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:26:27.693852  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
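
Note: provisioning runs these commands over the container's forwarded SSH port (127.0.0.1:33557 for ha-378189-m03) as the docker user with the machine's id_rsa key. A minimal sketch of issuing one such command, assuming golang.org/x/crypto/ssh; the relaxed host-key handling is purely for illustration, and the port/key path would differ per run.

// sshcmd_sketch.go: run one provisioning command over the node's forwarded SSH port.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33557", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// Same style of command as the hostname provisioning logged above.
	out, err := sess.CombinedOutput(`sudo hostname ha-378189-m03 && echo "ha-378189-m03" | sudo tee /etc/hostname`)
	fmt.Printf("out=%q err=%v\n", string(out), err)
}
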
	I1014 20:26:27.693921  594320 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 20:26:27.693958  594320 ubuntu.go:190] setting up certificates
	I1014 20:26:27.694000  594320 provision.go:84] configureAuth start
	I1014 20:26:27.694087  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m03
	I1014 20:26:27.718087  594320 provision.go:143] copyHostCerts
	I1014 20:26:27.718133  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:26:27.718165  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 20:26:27.718172  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:26:27.718249  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 20:26:27.718323  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:26:27.718348  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 20:26:27.718353  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:26:27.718380  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 20:26:27.718424  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:26:27.718444  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 20:26:27.718452  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:26:27.718478  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 20:26:27.718527  594320 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.ha-378189-m03 san=[127.0.0.1 192.168.49.4 ha-378189-m03 localhost minikube]
	I1014 20:26:28.885539  594320 provision.go:177] copyRemoteCerts
	I1014 20:26:28.885609  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:26:28.885857  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:28.909377  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:29.092351  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:26:29.092437  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:26:29.150396  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:26:29.150575  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 20:26:29.212427  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:26:29.212488  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:26:29.291296  594320 provision.go:87] duration metric: took 1.597262612s to configureAuth
	I1014 20:26:29.291319  594320 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:26:29.291554  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:26:29.291658  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:29.325450  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:26:29.325769  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1014 20:26:29.325785  594320 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:26:29.937523  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:26:29.937544  594320 machine.go:96] duration metric: took 6.283184743s to provisionDockerMachine
	I1014 20:26:29.937556  594320 start.go:293] postStartSetup for "ha-378189-m03" (driver="docker")
	I1014 20:26:29.937573  594320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:26:29.937654  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:26:29.937717  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:29.965387  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:30.103245  594320 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:26:30.108041  594320 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:26:30.108068  594320 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:26:30.108080  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 20:26:30.108144  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 20:26:30.108221  594320 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 20:26:30.108228  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /etc/ssl/certs/5469102.pem
	I1014 20:26:30.108351  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:26:30.118723  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:26:30.147273  594320 start.go:296] duration metric: took 209.701827ms for postStartSetup
	I1014 20:26:30.147363  594320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:26:30.147417  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:30.167399  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:30.286490  594320 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:26:30.294616  594320 fix.go:56] duration metric: took 7.158001675s for fixHost
	I1014 20:26:30.294649  594320 start.go:83] releasing machines lock for "ha-378189-m03", held for 7.158058317s
	I1014 20:26:30.294720  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m03
	I1014 20:26:30.332338  594320 out.go:179] * Found network options:
	I1014 20:26:30.335270  594320 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1014 20:26:30.338218  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	W1014 20:26:30.338252  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	W1014 20:26:30.338276  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	W1014 20:26:30.338286  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	I1014 20:26:30.338361  594320 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:26:30.338399  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:30.338666  594320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:26:30.338727  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:30.371445  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:30.378598  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:30.685955  594320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:26:30.710071  594320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:26:30.710156  594320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:26:30.762990  594320 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:26:30.763023  594320 start.go:495] detecting cgroup driver to use...
	I1014 20:26:30.763054  594320 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 20:26:30.763113  594320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:26:30.789796  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:26:30.813445  594320 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:26:30.813530  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:26:30.839481  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:26:30.861859  594320 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:26:31.219302  594320 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:26:31.622468  594320 docker.go:234] disabling docker service ...
	I1014 20:26:31.622554  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:26:31.669472  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:26:31.699352  594320 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:26:32.097354  594320 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:26:32.482606  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:26:32.523674  594320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:26:32.577860  594320 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:26:32.577931  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.625402  594320 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:26:32.625482  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.642162  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.655785  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.690980  594320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:26:32.710947  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.738596  594320 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.764228  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.794648  594320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:26:32.819907  594320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:26:32.841183  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:26:33.263701  594320 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:28:03.725730  594320 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.461946041s)
	I1014 20:28:03.725755  594320 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:28:03.725818  594320 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:28:03.730985  594320 start.go:563] Will wait 60s for crictl version
	I1014 20:28:03.731051  594320 ssh_runner.go:195] Run: which crictl
	I1014 20:28:03.734613  594320 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:28:03.777080  594320 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:28:03.777171  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:28:03.808074  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:28:03.842942  594320 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:28:03.845779  594320 out.go:179]   - env NO_PROXY=192.168.49.2
	I1014 20:28:03.848732  594320 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1014 20:28:03.851762  594320 cli_runner.go:164] Run: docker network inspect ha-378189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:28:03.869115  594320 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:28:03.872847  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:28:03.883220  594320 mustload.go:65] Loading cluster: ha-378189
	I1014 20:28:03.883459  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:28:03.883722  594320 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:28:03.902527  594320 host.go:66] Checking if "ha-378189" exists ...
	I1014 20:28:03.902796  594320 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189 for IP: 192.168.49.4
	I1014 20:28:03.902811  594320 certs.go:195] generating shared ca certs ...
	I1014 20:28:03.902825  594320 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:28:03.902952  594320 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 20:28:03.902998  594320 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 20:28:03.903010  594320 certs.go:257] generating profile certs ...
	I1014 20:28:03.903083  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key
	I1014 20:28:03.903150  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.02897361
	I1014 20:28:03.903194  594320 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key
	I1014 20:28:03.903292  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:28:03.903318  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:28:03.903337  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:28:03.903356  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:28:03.903373  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:28:03.903388  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:28:03.903405  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:28:03.903422  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:28:03.903477  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 20:28:03.903513  594320 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 20:28:03.903525  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 20:28:03.903550  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:28:03.903577  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:28:03.903603  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 20:28:03.903647  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:28:03.903678  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:28:03.903693  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem -> /usr/share/ca-certificates/546910.pem
	I1014 20:28:03.903709  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /usr/share/ca-certificates/5469102.pem
	I1014 20:28:03.903766  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:28:03.921249  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:28:04.020701  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1014 20:28:04.026586  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1014 20:28:04.037048  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1014 20:28:04.040988  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1014 20:28:04.050562  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1014 20:28:04.054508  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1014 20:28:04.063039  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1014 20:28:04.066687  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1014 20:28:04.075046  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1014 20:28:04.079077  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1014 20:28:04.087651  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1014 20:28:04.091714  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1014 20:28:04.099957  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:28:04.120044  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:28:04.141157  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:28:04.160012  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 20:28:04.182538  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 20:28:04.206993  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:28:04.231475  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:28:04.253537  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:28:04.275116  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:28:04.294127  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 20:28:04.313010  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 20:28:04.337638  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1014 20:28:04.350377  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1014 20:28:04.365979  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1014 20:28:04.380227  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1014 20:28:04.402519  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1014 20:28:04.428480  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1014 20:28:04.444300  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1014 20:28:04.465409  594320 ssh_runner.go:195] Run: openssl version
	I1014 20:28:04.472158  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:28:04.481746  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:28:04.485919  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:28:04.486029  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:28:04.534226  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:28:04.542709  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 20:28:04.552210  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 20:28:04.557889  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 20:28:04.558010  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 20:28:04.600600  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 20:28:04.608591  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 20:28:04.617463  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 20:28:04.621083  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 20:28:04.621172  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 20:28:04.663633  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:28:04.671866  594320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:28:04.675585  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:28:04.717923  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:28:04.759364  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:28:04.801264  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:28:04.842553  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:28:04.888303  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 20:28:04.930552  594320 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1014 20:28:04.930687  594320 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-378189-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:28:04.930727  594320 kube-vip.go:115] generating kube-vip config ...
	I1014 20:28:04.930788  594320 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:28:04.948125  594320 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
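
Note: the decision above hinges on whether the ip_vs kernel module is loaded; `lsmod | grep ip_vs` exiting 1 means it is not, so control-plane load-balancing is skipped and kube-vip is configured with the ARP-based VIP only (see the manifest below). A minimal sketch of an equivalent check that reads /proc/modules directly; this is an illustration, not how minikube implements it.

// ipvs_sketch.go: report whether the ip_vs kernel module is loaded.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/modules")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	found := false
	s := bufio.NewScanner(f)
	for s.Scan() {
		// Each line starts with the module name, e.g. "ip_vs 184320 6 ip_vs_rr,...".
		fields := strings.Fields(s.Text())
		if len(fields) > 0 && fields[0] == "ip_vs" {
			found = true
			break
		}
	}
	if found {
		fmt.Println("ip_vs loaded: control-plane load-balancing could be enabled")
	} else {
		fmt.Println("ip_vs not loaded: skipping control-plane load-balancing")
	}
}
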
	I1014 20:28:04.948234  594320 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
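
Note: the manifest above is a static pod spec; it is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below so the kubelet runs kube-vip on this control-plane node and manages the 192.168.49.254 VIP. A minimal sketch of loading and inspecting such a manifest, assuming k8s.io/api and sigs.k8s.io/yaml; the local file path is hypothetical.

// staticpod_sketch.go: parse a kube-vip style static pod manifest and print key fields.
package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	data, err := os.ReadFile("kube-vip.yaml") // hypothetical local copy of the manifest
	if err != nil {
		panic(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		panic(err)
	}
	for _, c := range pod.Spec.Containers {
		fmt.Printf("container %s uses image %s\n", c.Name, c.Image)
		for _, e := range c.Env {
			if e.Name == "address" {
				fmt.Printf("  VIP address: %s\n", e.Value) // 192.168.49.254 in the manifest above
			}
		}
	}
}
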
	I1014 20:28:04.948301  594320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:28:04.956444  594320 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:28:04.956519  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1014 20:28:04.964479  594320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1014 20:28:04.980043  594320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:28:04.994905  594320 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1014 20:28:05.014277  594320 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:28:05.019048  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:28:05.032269  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:28:05.165397  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:28:05.183614  594320 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:28:05.184053  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:28:05.189131  594320 out.go:179] * Verifying Kubernetes components...
	I1014 20:28:05.192051  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:28:05.332393  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:28:05.346845  594320 kapi.go:59] client config for ha-378189: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key", CAFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1014 20:28:05.346924  594320 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1014 20:28:05.347150  594320 node_ready.go:35] waiting up to 6m0s for node "ha-378189-m03" to be "Ready" ...
	W1014 20:28:07.351440  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:09.352350  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:11.851687  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:13.851750  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:16.351273  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:18.351732  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:20.354256  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:22.851172  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:24.851281  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:27.351420  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:29.851348  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:32.351357  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:34.851229  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:36.852278  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:39.352122  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:41.851851  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:44.351532  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:46.850950  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:49.352299  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:51.850817  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:53.851323  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:56.350480  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:58.351469  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:00.361289  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:02.851458  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:05.350476  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:07.350815  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:09.351365  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:11.851059  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:13.851563  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:16.351339  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:18.850245  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:20.855138  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:23.351487  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:25.852768  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:28.351284  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:30.351537  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:32.851539  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:35.351487  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:37.850912  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:39.851077  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:41.851598  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:44.351457  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:46.351534  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:48.850734  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:50.851231  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:52.851432  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:55.351286  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:57.351688  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:59.851092  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:01.852021  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:04.350799  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:06.851328  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:09.351536  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:11.353512  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:13.851897  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:15.852000  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:18.351490  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:20.851616  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:23.351164  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:25.351400  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:27.352193  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:29.850751  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:31.851102  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:34.350486  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:36.351583  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:38.851490  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:41.350742  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:43.357724  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:45.850854  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:47.851206  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:50.350789  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:52.850626  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:54.851276  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:57.350761  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:59.351538  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:01.851391  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:04.350763  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:06.850386  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:08.851237  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:11.351648  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:13.850996  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:15.851345  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:18.351324  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:20.355277  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:22.851213  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:25.350735  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:27.850574  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:29.856637  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:32.350493  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:34.351300  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:36.353689  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:38.851719  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:40.852010  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:43.351491  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:45.354360  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:47.851118  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:49.851408  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:52.351137  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:54.851402  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:57.350516  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:59.351590  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:01.851121  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:04.350628  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:06.350691  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:08.351228  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:10.851190  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:13.351117  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:15.850576  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:17.851400  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:20.351576  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:22.851057  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:25.351322  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:27.355638  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:29.851145  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:31.851487  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:34.352259  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:36.850522  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:39.351831  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:41.850768  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:44.350717  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:46.351296  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:48.850913  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:50.851023  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:53.350835  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:55.353231  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:57.851446  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:00.354501  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:02.850997  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:05.350786  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:07.350850  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:09.351182  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:11.850869  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:13.851809  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:16.351480  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:18.850797  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:20.851760  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:23.351587  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:25.850979  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:28.351624  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:30.851397  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:33.351390  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:35.351573  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:37.850906  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:40.351625  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:42.852099  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:45.352080  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:47.850966  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:49.851115  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:51.851586  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:54.350406  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:56.350741  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:58.850717  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:34:00.850850  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:34:02.851136  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	I1014 20:34:05.347342  594320 node_ready.go:38] duration metric: took 6m0.000162726s for node "ha-378189-m03" to be "Ready" ...
	I1014 20:34:05.350610  594320 out.go:203] 
	W1014 20:34:05.353525  594320 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1014 20:34:05.353550  594320 out.go:285] * 
	W1014 20:34:05.355696  594320 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:34:05.358382  594320 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 20:26:21 ha-378189 crio[666]: time="2025-10-14T20:26:21.589653552Z" level=info msg="Started container" PID=1152 containerID=b52d43b08a0f0f3d58627b50e98ad91edb2949ba45cac9addb1a701ea81a7ab6 description=default/busybox-7b57f96db7-q27pj/busybox id=3a27ea99-be07-47d1-af55-67ce0d5512da name=/runtime.v1.RuntimeService/StartContainer sandboxID=d6cb5d57424988afbfa1dc5617cd0c429ceb241689cd63972a803390cece10e9
	Oct 14 20:26:52 ha-378189 conmon[1156]: conmon 363f25aac2f0263c8db4 <ninfo>: container 1159 exited with status 1
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.625980521Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5fe72c90-cb4c-4e12-b9ae-04fb5a121866 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.627066339Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fbaeb554-6cf4-4d97-8ba3-e4435db48ee3 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.628061243Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=cdfab445-4fce-47fd-8f74-84d39f5dc931 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.628282768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.635415286Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.635711348Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f62ccfbf6b161947a7cc74fc5dcf3603e4dda48bfb3e96ddbcd70febf8339654/merged/etc/passwd: no such file or directory"
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.63580049Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f62ccfbf6b161947a7cc74fc5dcf3603e4dda48bfb3e96ddbcd70febf8339654/merged/etc/group: no such file or directory"
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.636120757Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.66493445Z" level=info msg="Created container 80de2a13d9d5ba8f0e3d5095bcabb43ad2fd175b927888fd1a8d084399c07313: kube-system/storage-provisioner/storage-provisioner" id=cdfab445-4fce-47fd-8f74-84d39f5dc931 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.666015739Z" level=info msg="Starting container: 80de2a13d9d5ba8f0e3d5095bcabb43ad2fd175b927888fd1a8d084399c07313" id=6b17f4e7-453e-48f0-a667-21e3e6b072ec name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.669895504Z" level=info msg="Started container" PID=1401 containerID=80de2a13d9d5ba8f0e3d5095bcabb43ad2fd175b927888fd1a8d084399c07313 description=kube-system/storage-provisioner/storage-provisioner id=6b17f4e7-453e-48f0-a667-21e3e6b072ec name=/runtime.v1.RuntimeService/StartContainer sandboxID=0baae86d64cea1e60128930d082a0936798e1ae0fa7d9cc62159fa6df0e1584f
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.588845238Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.592539237Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.592598881Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.592618352Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.59592804Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.595974687Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.59599566Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.599034558Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.599072967Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.599095368Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.602072653Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.602106582Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	80de2a13d9d5b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       2                   0baae86d64cea       storage-provisioner                 kube-system
	ada8dd32f25e1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   c1d0c1569b9ee       coredns-66bc5c9577-rdfdx            kube-system
	626c2287b0728       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   7d46cbab6384c       coredns-66bc5c9577-fqrq6            kube-system
	363f25aac2f02       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       1                   0baae86d64cea       storage-provisioner                 kube-system
	1e6277796c71e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   192c9d36b9c8a       kube-proxy-vcfj7                    kube-system
	b52d43b08a0f0       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   d6cb5d5742498       busybox-7b57f96db7-q27pj            default
	19b8d6e7d9d03       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   bfe53b6aba234       kindnet-xs5v9                       kube-system
	d232b32e38df9       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   7 minutes ago       Running             kube-controller-manager   2                   a4b4e27bd77f9       kube-controller-manager-ha-378189   kube-system
	750164802f535       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            2                   0c4960be5b852       kube-apiserver-ha-378189            kube-system
	a140acf9afcdd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Exited              kube-apiserver            1                   0c4960be5b852       kube-apiserver-ha-378189            kube-system
	29d51a80b14a0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   1                   a4b4e27bd77f9       kube-controller-manager-ha-378189   kube-system
	6eb0610f607e1       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   d05d495ee5973       kube-vip-ha-378189                  kube-system
	1df4877419cf7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   0e3b61d4f5605       kube-scheduler-ha-378189            kube-system
	e43d3be1d91f3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   eaed44524556d       etcd-ha-378189                      kube-system
	
	
	==> coredns [626c2287b07287457eac7f6e1b44647d8632ec7bb0a670aa408f969a93c332af] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53276 - 13110 "HINFO IN 6427901219273669792.4527693482397380301. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022504393s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ada8dd32f25e1279de4ffb5acc3114c55c10d41b5d58fa972ded2662fadec5c9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40316 - 29901 "HINFO IN 5531300076892638066.4168205942486472486. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013682922s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-378189
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-378189
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=ha-378189
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T20_20_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 20:19:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-378189
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 20:34:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 20:34:08 +0000   Tue, 14 Oct 2025 20:19:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 20:34:08 +0000   Tue, 14 Oct 2025 20:19:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 20:34:08 +0000   Tue, 14 Oct 2025 20:19:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 20:34:08 +0000   Tue, 14 Oct 2025 20:20:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-378189
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 0978d9c8ec5d4898bfbd451e1fb9d4bc
	  System UUID:                3bbab8a9-98e5-49cd-8237-de9f16fcba7a
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-q27pj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-fqrq6             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-rdfdx             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-378189                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-xs5v9                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-378189             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-378189    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-vcfj7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-378189             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-378189                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m51s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-378189 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-378189 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-378189 status is now: NodeHasSufficientMemory
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-378189 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-378189 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-378189 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                    node-controller  Node ha-378189 event: Registered Node ha-378189 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-378189 event: Registered Node ha-378189 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-378189 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-378189 event: Registered Node ha-378189 in Controller
	  Normal   RegisteredNode           8m54s                  node-controller  Node ha-378189 event: Registered Node ha-378189 in Controller
	  Normal   Starting                 8m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m25s (x8 over 8m25s)  kubelet          Node ha-378189 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m25s (x8 over 8m25s)  kubelet          Node ha-378189 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m25s (x8 over 8m25s)  kubelet          Node ha-378189 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m47s                  node-controller  Node ha-378189 event: Registered Node ha-378189 in Controller
	  Normal   RegisteredNode           7m41s                  node-controller  Node ha-378189 event: Registered Node ha-378189 in Controller
	
	
	Name:               ha-378189-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-378189-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=ha-378189
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_14T20_20_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 20:20:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-378189-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 20:34:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 20:33:39 +0000   Tue, 14 Oct 2025 20:25:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 20:33:39 +0000   Tue, 14 Oct 2025 20:25:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 20:33:39 +0000   Tue, 14 Oct 2025 20:25:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 20:33:39 +0000   Tue, 14 Oct 2025 20:25:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-378189-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 efd151379e5c4b2ebd4e80e64c360eed
	  System UUID:                3e1bc125-e145-47d1-8100-88ab5dc3051b
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-tkxmq                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-378189-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-qjxhf                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-378189-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-378189-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-2cc9g                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-378189-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-378189-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 7m38s                  kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-378189-m02 event: Registered Node ha-378189-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-378189-m02 event: Registered Node ha-378189-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-378189-m02 event: Registered Node ha-378189-m02 in Controller
	  Normal   NodeHasSufficientPID     9m32s (x8 over 9m32s)  kubelet          Node ha-378189-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    9m32s (x8 over 9m32s)  kubelet          Node ha-378189-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 9m32s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m32s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m32s (x8 over 9m32s)  kubelet          Node ha-378189-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeNotReady             8m58s                  node-controller  Node ha-378189-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           8m54s                  node-controller  Node ha-378189-m02 event: Registered Node ha-378189-m02 in Controller
	  Normal   Starting                 8m21s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m21s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m21s (x8 over 8m21s)  kubelet          Node ha-378189-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m21s (x8 over 8m21s)  kubelet          Node ha-378189-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m21s (x8 over 8m21s)  kubelet          Node ha-378189-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m47s                  node-controller  Node ha-378189-m02 event: Registered Node ha-378189-m02 in Controller
	  Normal   RegisteredNode           7m41s                  node-controller  Node ha-378189-m02 event: Registered Node ha-378189-m02 in Controller
	
	
	Name:               ha-378189-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-378189-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=ha-378189
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_14T20_23_23_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 20:23:23 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-378189-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 20:25:15 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 14 Oct 2025 20:24:54 +0000   Tue, 14 Oct 2025 20:27:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 14 Oct 2025 20:24:54 +0000   Tue, 14 Oct 2025 20:27:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 14 Oct 2025 20:24:54 +0000   Tue, 14 Oct 2025 20:27:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 14 Oct 2025 20:24:54 +0000   Tue, 14 Oct 2025 20:27:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-378189-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d1e8ea2da5b40c08e7704d42f98e14c
	  System UUID:                a2a231a7-55aa-4186-9e9a-7dad075b6ff1
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-54mnm       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-dkq7w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-378189-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-378189-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-378189-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node ha-378189-m04 event: Registered Node ha-378189-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-378189-m04 event: Registered Node ha-378189-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-378189-m04 event: Registered Node ha-378189-m04 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-378189-m04 status is now: NodeReady
	  Normal  RegisteredNode           8m54s              node-controller  Node ha-378189-m04 event: Registered Node ha-378189-m04 in Controller
	  Normal  RegisteredNode           7m47s              node-controller  Node ha-378189-m04 event: Registered Node ha-378189-m04 in Controller
	  Normal  RegisteredNode           7m41s              node-controller  Node ha-378189-m04 event: Registered Node ha-378189-m04 in Controller
	  Normal  NodeNotReady             6m57s              node-controller  Node ha-378189-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[ +28.158583] overlayfs: idmapped layers are currently not supported
	[Oct14 19:45] overlayfs: idmapped layers are currently not supported
	[Oct14 19:46] overlayfs: idmapped layers are currently not supported
	[Oct14 19:47] overlayfs: idmapped layers are currently not supported
	[ +31.505487] overlayfs: idmapped layers are currently not supported
	[Oct14 19:48] overlayfs: idmapped layers are currently not supported
	[ +35.381153] overlayfs: idmapped layers are currently not supported
	[ +14.062769] overlayfs: idmapped layers are currently not supported
	[Oct14 19:50] overlayfs: idmapped layers are currently not supported
	[Oct14 19:51] overlayfs: idmapped layers are currently not supported
	[Oct14 19:52] overlayfs: idmapped layers are currently not supported
	[  +0.143459] overlayfs: idmapped layers are currently not supported
	[Oct14 19:54] overlayfs: idmapped layers are currently not supported
	[Oct14 19:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct14 19:59] overlayfs: idmapped layers are currently not supported
	[Oct14 20:05] overlayfs: idmapped layers are currently not supported
	[Oct14 20:06] overlayfs: idmapped layers are currently not supported
	[Oct14 20:19] overlayfs: idmapped layers are currently not supported
	[Oct14 20:20] overlayfs: idmapped layers are currently not supported
	[Oct14 20:21] overlayfs: idmapped layers are currently not supported
	[Oct14 20:23] overlayfs: idmapped layers are currently not supported
	[Oct14 20:24] overlayfs: idmapped layers are currently not supported
	[Oct14 20:25] overlayfs: idmapped layers are currently not supported
	[  +4.162636] overlayfs: idmapped layers are currently not supported
	[Oct14 20:26] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e43d3be1d91f3e04acb8a0a4ca52d2c72f059654b12a78440dcab6115d7100aa] <==
	{"level":"info","ts":"2025-10-14T20:28:06.190047Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"77cd6ee1d0a2b5a3","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-10-14T20:28:06.190169Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"info","ts":"2025-10-14T20:28:06.222402Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"info","ts":"2025-10-14T20:28:06.223226Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"warn","ts":"2025-10-14T20:28:06.604394Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77cd6ee1d0a2b5a3","rtt":"310.017358ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-14T20:28:06.604496Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77cd6ee1d0a2b5a3","rtt":"309.548378ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-10-14T20:34:09.523117Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12520427846644105758 12593026477526642892)"}
	{"level":"info","ts":"2025-10-14T20:34:09.525140Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"77cd6ee1d0a2b5a3","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-10-14T20:34:09.525190Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"warn","ts":"2025-10-14T20:34:09.525251Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"info","ts":"2025-10-14T20:34:09.525273Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"warn","ts":"2025-10-14T20:34:09.525255Z","caller":"etcdserver/server.go:718","msg":"rejected Raft message from removed member","local-member-id":"aec36adc501070cc","removed-member-id":"77cd6ee1d0a2b5a3"}
	{"level":"warn","ts":"2025-10-14T20:34:09.525373Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2025-10-14T20:34:09.525440Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"info","ts":"2025-10-14T20:34:09.525463Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"info","ts":"2025-10-14T20:34:09.525547Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"warn","ts":"2025-10-14T20:34:09.525655Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3","error":"context canceled"}
	{"level":"warn","ts":"2025-10-14T20:34:09.525720Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"77cd6ee1d0a2b5a3","error":"failed to read 77cd6ee1d0a2b5a3 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-10-14T20:34:09.525738Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"warn","ts":"2025-10-14T20:34:09.525960Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3","error":"context canceled"}
	{"level":"info","ts":"2025-10-14T20:34:09.526001Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"info","ts":"2025-10-14T20:34:09.526013Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"info","ts":"2025-10-14T20:34:09.526023Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"warn","ts":"2025-10-14T20:34:09.573763Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"77cd6ee1d0a2b5a3"}
	{"level":"warn","ts":"2025-10-14T20:34:09.582489Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"77cd6ee1d0a2b5a3"}
	
	
	==> kernel <==
	 20:34:15 up  3:16,  0 user,  load average: 2.73, 1.75, 1.69
	Linux ha-378189 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19b8d6e7d9d0342c76a1e404b13de2161ab3c4bf2f293b51851e009a1b5f7f08] <==
	I1014 20:33:41.582564       1 main.go:324] Node ha-378189-m03 has CIDR [10.244.2.0/24] 
	I1014 20:33:41.582642       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1014 20:33:41.582652       1 main.go:324] Node ha-378189-m04 has CIDR [10.244.3.0/24] 
	I1014 20:33:51.588504       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:33:51.588651       1 main.go:301] handling current node
	I1014 20:33:51.588720       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1014 20:33:51.588753       1 main.go:324] Node ha-378189-m02 has CIDR [10.244.1.0/24] 
	I1014 20:33:51.588972       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1014 20:33:51.589013       1 main.go:324] Node ha-378189-m03 has CIDR [10.244.2.0/24] 
	I1014 20:33:51.589196       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1014 20:33:51.589242       1 main.go:324] Node ha-378189-m04 has CIDR [10.244.3.0/24] 
	I1014 20:34:01.588534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:34:01.588567       1 main.go:301] handling current node
	I1014 20:34:01.588586       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1014 20:34:01.588593       1 main.go:324] Node ha-378189-m02 has CIDR [10.244.1.0/24] 
	I1014 20:34:01.588751       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1014 20:34:01.588779       1 main.go:324] Node ha-378189-m03 has CIDR [10.244.2.0/24] 
	I1014 20:34:01.588872       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1014 20:34:01.588899       1 main.go:324] Node ha-378189-m04 has CIDR [10.244.3.0/24] 
	I1014 20:34:11.582458       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:34:11.582574       1 main.go:301] handling current node
	I1014 20:34:11.582632       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1014 20:34:11.582647       1 main.go:324] Node ha-378189-m02 has CIDR [10.244.1.0/24] 
	I1014 20:34:11.582798       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1014 20:34:11.582813       1 main.go:324] Node ha-378189-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [750164802f535723e47d9b886929b01a0d9c918009411e14769f1b1bd030b93a] <==
	I1014 20:26:20.042627       1 policy_source.go:240] refreshing policies
	I1014 20:26:20.079160       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1014 20:26:20.093566       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1014 20:26:20.093627       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1014 20:26:20.112990       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 20:26:20.113438       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 20:26:20.113624       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 20:26:20.119480       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 20:26:20.119521       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 20:26:20.119613       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1014 20:26:20.119654       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1014 20:26:20.121370       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1014 20:26:20.126606       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1014 20:26:20.130342       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 20:26:20.136886       1 cache.go:39] Caches are synced for autoregister controller
	W1014 20:26:20.207177       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1014 20:26:20.213176       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 20:26:20.235225       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1014 20:26:20.242018       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1014 20:26:20.523492       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 20:26:20.826556       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1014 20:26:22.469181       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1014 20:26:28.935001       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1014 20:26:29.018133       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1014 20:26:36.974045       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-apiserver [a140acf9afcdd0e524e513679b658bc7b2dfb9cfc9ef073ace6322510c7f8b59] <==
	I1014 20:25:51.233702       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1014 20:25:54.896784       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1014 20:25:54.896886       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1014 20:25:54.896942       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1014 20:25:54.896973       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1014 20:25:54.897001       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1014 20:25:54.897033       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1014 20:25:54.897063       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1014 20:25:54.897093       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1014 20:25:54.897121       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1014 20:25:54.897149       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1014 20:25:54.897178       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1014 20:25:54.897206       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1014 20:25:54.922112       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1014 20:25:54.935912       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	W1014 20:25:54.937765       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1014 20:25:54.962374       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1014 20:25:54.968742       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1014 20:25:54.969218       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1014 20:25:54.969513       1 instance.go:239] Using reconciler: lease
	W1014 20:25:54.971617       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1014 20:26:14.921833       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1014 20:26:14.925037       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1014 20:26:14.970811       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1014 20:26:14.970821       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [29d51a80b14a0f25f45cfca782f7ec24806bb067ccfbf71ca7754867bd5b0dab] <==
	I1014 20:25:54.406785       1 serving.go:386] Generated self-signed cert in-memory
	I1014 20:25:55.132646       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1014 20:25:55.132731       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:25:55.147018       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1014 20:25:55.147823       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 20:25:55.148017       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 20:25:55.148140       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1014 20:26:20.022395       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [d232b32e38df9f91bd238189f93a91f9a75b3d07dfb77d79c8de6932a7885216] <==
	I1014 20:26:28.618601       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 20:26:28.618752       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-378189-m02"
	I1014 20:26:28.618829       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-378189-m03"
	I1014 20:26:28.618890       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-378189-m04"
	I1014 20:26:28.618954       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-378189"
	I1014 20:26:28.604585       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 20:26:28.619106       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1014 20:26:28.602874       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1014 20:26:28.629252       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1014 20:26:28.602883       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1014 20:26:28.630235       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1014 20:26:28.602893       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1014 20:26:28.630543       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1014 20:26:28.603399       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1014 20:26:28.556415       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1014 20:26:28.637223       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1014 20:26:28.682627       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 20:26:28.682751       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1014 20:26:28.741655       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 20:26:28.753923       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 20:26:28.754530       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1014 20:26:28.754582       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 20:26:28.808506       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	E1014 20:27:02.690518       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1014 20:32:20.021441       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-n6bnr"
	
	
	==> kube-proxy [1e6277796c71ea2156b2442bc5bccf388b38d67c931d7ebfd1af22d8148d7f4a] <==
	I1014 20:26:22.368598       1 server_linux.go:53] "Using iptables proxy"
	I1014 20:26:23.150050       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 20:26:23.250265       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 20:26:23.250370       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1014 20:26:23.250482       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 20:26:24.166639       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 20:26:24.166712       1 server_linux.go:132] "Using iptables Proxier"
	I1014 20:26:24.319706       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 20:26:24.320064       1 server.go:527] "Version info" version="v1.34.1"
	I1014 20:26:24.320240       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:26:24.338325       1 config.go:200] "Starting service config controller"
	I1014 20:26:24.338693       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 20:26:24.338764       1 config.go:106] "Starting endpoint slice config controller"
	I1014 20:26:24.338796       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 20:26:24.338833       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 20:26:24.338859       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 20:26:24.341262       1 config.go:309] "Starting node config controller"
	I1014 20:26:24.342084       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 20:26:24.342169       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 20:26:24.440637       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 20:26:24.440752       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 20:26:24.440814       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1df4877419cf704774f6c1342d72a03a045a372f39ae449678e06f3d08045f42] <==
	I1014 20:25:54.629182       1 serving.go:386] Generated self-signed cert in-memory
	W1014 20:26:05.981471       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W1014 20:26:05.981507       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 20:26:05.981516       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 20:26:19.941274       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 20:26:19.947786       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:26:19.950118       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:26:19.952381       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:26:19.954023       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 20:26:19.955708       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 20:26:20.152543       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 14 20:26:20 ha-378189 kubelet[795]: E1014 20:26:20.242979     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-378189\" already exists" pod="kube-system/kube-apiserver-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.243174     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: E1014 20:26:20.266372     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-378189\" already exists" pod="kube-system/kube-controller-manager-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.266545     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: E1014 20:26:20.277022     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-378189\" already exists" pod="kube-system/kube-scheduler-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.335489     795 apiserver.go:52] "Watching apiserver"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.341320     795 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-378189" podUID="58e47d49-6119-454a-8ef7-5ff6d46dbc8e"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.377684     795 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.377713     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: E1014 20:26:20.380550     795 status_manager.go:1041] "Failed to update status for pod" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e47d49-6119-454a-8ef7-5ff6d46dbc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T20:25:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T20:25:50Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T20:25:51Z\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T20:25:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T20:25:50Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://6eb0610f607e1c2adb8b4aa2a97c40900841b7ac3d7c573af4ffb1649fa6481b\\\",\\\"image\\\":\\\"ghcr.io/kube-vip/kube-vip:v1.0.1\\\",\\\"imageID\\\":\\\"ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-vip\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T20:25:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/admin.conf\\\",\\\"name\\\":\\\"kubeconfig\\\"}]}],\\\"startTime\\\":\\\"2025-10-14T20:25:50Z\\\"}}\" for pod \"kube-system\"/\"kube-vip-ha-378189\": pods \"kube-vip-ha-378189\" not found" pod="kube-system/kube-vip-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.384848     795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a6eb6d052fb3fa4b9cbf90736fdb46e" path="/var/lib/kubelet/pods/5a6eb6d052fb3fa4b9cbf90736fdb46e/volumes"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.439465     795 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.474075     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fcdfe744-7240-4d2c-887b-3ace1aaf041d-cni-cfg\") pod \"kindnet-xs5v9\" (UID: \"fcdfe744-7240-4d2c-887b-3ace1aaf041d\") " pod="kube-system/kindnet-xs5v9"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.474378     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fcdfe744-7240-4d2c-887b-3ace1aaf041d-xtables-lock\") pod \"kindnet-xs5v9\" (UID: \"fcdfe744-7240-4d2c-887b-3ace1aaf041d\") " pod="kube-system/kindnet-xs5v9"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.474711     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcdfe744-7240-4d2c-887b-3ace1aaf041d-lib-modules\") pod \"kindnet-xs5v9\" (UID: \"fcdfe744-7240-4d2c-887b-3ace1aaf041d\") " pod="kube-system/kindnet-xs5v9"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.474808     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c29665fb-40d6-4509-be63-bcb379888e1a-lib-modules\") pod \"kube-proxy-vcfj7\" (UID: \"c29665fb-40d6-4509-be63-bcb379888e1a\") " pod="kube-system/kube-proxy-vcfj7"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.474885     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6cd440a2-0b51-4f78-8285-48bf107cf288-tmp\") pod \"storage-provisioner\" (UID: \"6cd440a2-0b51-4f78-8285-48bf107cf288\") " pod="kube-system/storage-provisioner"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.474996     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c29665fb-40d6-4509-be63-bcb379888e1a-xtables-lock\") pod \"kube-proxy-vcfj7\" (UID: \"c29665fb-40d6-4509-be63-bcb379888e1a\") " pod="kube-system/kube-proxy-vcfj7"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.489643     795 scope.go:117] "RemoveContainer" containerID="29d51a80b14a0f25f45cfca782f7ec24806bb067ccfbf71ca7754867bd5b0dab"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.579639     795 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 14 20:26:21 ha-378189 kubelet[795]: W1014 20:26:20.998774     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/crio-c1d0c1569b9ee9cc4effe5674d31a059d49f0e7345bbd9f839e16f3b2890ffd0 WatchSource:0}: Error finding container c1d0c1569b9ee9cc4effe5674d31a059d49f0e7345bbd9f839e16f3b2890ffd0: Status 404 returned error can't find the container with id c1d0c1569b9ee9cc4effe5674d31a059d49f0e7345bbd9f839e16f3b2890ffd0
	Oct 14 20:26:21 ha-378189 kubelet[795]: I1014 20:26:21.651722     795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-378189" podStartSLOduration=1.6517024089999999 podStartE2EDuration="1.651702409s" podCreationTimestamp="2025-10-14 20:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 20:26:21.43011465 +0000 UTC m=+31.232674378" watchObservedRunningTime="2025-10-14 20:26:21.651702409 +0000 UTC m=+31.454262129"
	Oct 14 20:26:50 ha-378189 kubelet[795]: E1014 20:26:50.327193     795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ed75cd94e8fcbf8b6d07b018e25b54958fa14a2404e6404aa955941052c61cc\": container with ID starting with 9ed75cd94e8fcbf8b6d07b018e25b54958fa14a2404e6404aa955941052c61cc not found: ID does not exist" containerID="9ed75cd94e8fcbf8b6d07b018e25b54958fa14a2404e6404aa955941052c61cc"
	Oct 14 20:26:50 ha-378189 kubelet[795]: I1014 20:26:50.327265     795 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="9ed75cd94e8fcbf8b6d07b018e25b54958fa14a2404e6404aa955941052c61cc" err="rpc error: code = NotFound desc = could not find container \"9ed75cd94e8fcbf8b6d07b018e25b54958fa14a2404e6404aa955941052c61cc\": container with ID starting with 9ed75cd94e8fcbf8b6d07b018e25b54958fa14a2404e6404aa955941052c61cc not found: ID does not exist"
	Oct 14 20:26:52 ha-378189 kubelet[795]: I1014 20:26:52.625158     795 scope.go:117] "RemoveContainer" containerID="363f25aac2f0263c8db462f13cef072fd6f75d27229590360981202746ac71ab"
	

                                                
                                                
-- /stdout --
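The etcd log in the dump above shows the surviving member aec36adc501070cc removing peer 77cd6ee1d0a2b5a3 (the deleted ha-378189-m03 control plane at 192.168.49.4:2380) and then rejecting further streams from it, which is the expected sequence after "node delete m03". A minimal sketch for confirming the remaining membership by hand; the pod name etcd-ha-378189 follows the usual kubeadm naming and the certificate paths are minikube's usual locations under /var/lib/minikube/certs/etcd, both of which may differ in other setups:

    kubectl --context ha-378189 -n kube-system exec etcd-ha-378189 -- \
      etcdctl --endpoints=https://127.0.0.1:2379 \
        --cacert=/var/lib/minikube/certs/etcd/ca.crt \
        --cert=/var/lib/minikube/certs/etcd/server.crt \
        --key=/var/lib/minikube/certs/etcd/server.key \
      member list -w table

With the secondary removed, the table should list only the two voting members that the raft configuration switch above already reports.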
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-378189 -n ha-378189
helpers_test.go:269: (dbg) Run:  kubectl --context ha-378189 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-q7g99
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-378189 describe pod busybox-7b57f96db7-q7g99
helpers_test.go:290: (dbg) kubectl --context ha-378189 describe pod busybox-7b57f96db7-q7g99:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-q7g99
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mll6g (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-mll6g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  116s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  116s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  7s    default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  7s    default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (8.66s)
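The describe output above explains why busybox-7b57f96db7-q7g99 stays Pending: per the latest FailedScheduling events, two of the four nodes fail the deployment's pod anti-affinity rule, one carries the node.kubernetes.io/unreachable taint, and one is marked unschedulable. A short sketch for inspecting those node conditions directly, assuming the same ha-378189 kube context the test uses:

    kubectl --context ha-378189 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.unschedulable}{"\t"}{.spec.taints[*].key}{"\n"}{end}'

Only nodes with an empty taint column and no unschedulable flag remain candidates for the scheduler.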

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-378189" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-378189\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-378189\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-378189\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\
"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"Sta
ticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-378189
helpers_test.go:243: (dbg) docker inspect ha-378189:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2",
	        "Created": "2025-10-14T20:19:36.721865815Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 594446,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:25:43.632516423Z",
	            "FinishedAt": "2025-10-14T20:25:42.992747077Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/hosts",
	        "LogPath": "/var/lib/docker/containers/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2-json.log",
	        "Name": "/ha-378189",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-378189:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-378189",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2",
	                "LowerDir": "/var/lib/docker/overlay2/9ac8fb770ddeadce23f587f1c58404e77e9b514b1d0afc23aafe028ba7c4d415-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9ac8fb770ddeadce23f587f1c58404e77e9b514b1d0afc23aafe028ba7c4d415/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9ac8fb770ddeadce23f587f1c58404e77e9b514b1d0afc23aafe028ba7c4d415/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9ac8fb770ddeadce23f587f1c58404e77e9b514b1d0afc23aafe028ba7c4d415/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-378189",
	                "Source": "/var/lib/docker/volumes/ha-378189/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-378189",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-378189",
	                "name.minikube.sigs.k8s.io": "ha-378189",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05a8c6a7d89e6c1e1256598987cf30badae51e341335b4f7acba8aa6e9381f33",
	            "SandboxKey": "/var/run/docker/netns/05a8c6a7d89e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33547"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33548"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33551"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33549"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33550"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-378189": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:83:31:11:3e:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b3483fd8ccce49d2e0fd5e22713527a7e829a5fed87dbabc5f1c994fe1baa1e7",
	                    "EndpointID": "ba620ff7fedd67e5484d20b7d869783e6c44a023d61f3b6e243822cf66dbdde7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-378189",
	                        "bbe33f2b58f8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
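The inspect dump above confirms the ha-378189 container is running (PID 594446, RestartCount 0) with address 192.168.49.2 on the ha-378189 network. When the full JSON is more than needed, a format string can pull just the fields this post-mortem cares about; a small sketch:

    docker inspect -f 'status={{.State.Status}} restarts={{.RestartCount}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-378189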
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-378189 -n ha-378189
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-378189 logs -n 25: (1.472079879s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-378189 ssh -n ha-378189-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m02 sudo cat /home/docker/cp-test_ha-378189-m03_ha-378189-m02.txt                                         │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ cp      │ ha-378189 cp ha-378189-m03:/home/docker/cp-test.txt ha-378189-m04:/home/docker/cp-test_ha-378189-m03_ha-378189-m04.txt               │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m04 sudo cat /home/docker/cp-test_ha-378189-m03_ha-378189-m04.txt                                         │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ cp      │ ha-378189 cp testdata/cp-test.txt ha-378189-m04:/home/docker/cp-test.txt                                                             │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ cp      │ ha-378189 cp ha-378189-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3053045460/001/cp-test_ha-378189-m04.txt │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ cp      │ ha-378189 cp ha-378189-m04:/home/docker/cp-test.txt ha-378189:/home/docker/cp-test_ha-378189-m04_ha-378189.txt                       │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189 sudo cat /home/docker/cp-test_ha-378189-m04_ha-378189.txt                                                 │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ cp      │ ha-378189 cp ha-378189-m04:/home/docker/cp-test.txt ha-378189-m02:/home/docker/cp-test_ha-378189-m04_ha-378189-m02.txt               │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m02 sudo cat /home/docker/cp-test_ha-378189-m04_ha-378189-m02.txt                                         │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ cp      │ ha-378189 cp ha-378189-m04:/home/docker/cp-test.txt ha-378189-m03:/home/docker/cp-test_ha-378189-m04_ha-378189-m03.txt               │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ ssh     │ ha-378189 ssh -n ha-378189-m03 sudo cat /home/docker/cp-test_ha-378189-m04_ha-378189-m03.txt                                         │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ node    │ ha-378189 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:24 UTC │
	│ node    │ ha-378189 node start m02 --alsologtostderr -v 5                                                                                      │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:24 UTC │ 14 Oct 25 20:25 UTC │
	│ node    │ ha-378189 node list --alsologtostderr -v 5                                                                                           │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:25 UTC │                     │
	│ stop    │ ha-378189 stop --alsologtostderr -v 5                                                                                                │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:25 UTC │ 14 Oct 25 20:25 UTC │
	│ start   │ ha-378189 start --wait true --alsologtostderr -v 5                                                                                   │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:25 UTC │                     │
	│ node    │ ha-378189 node list --alsologtostderr -v 5                                                                                           │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:34 UTC │                     │
	│ node    │ ha-378189 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-378189 │ jenkins │ v1.37.0 │ 14 Oct 25 20:34 UTC │ 14 Oct 25 20:34 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
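	The cp/ssh pairs in the table above follow a copy-then-verify pattern: every cp is immediately read back with ssh -n on the destination node. A minimal sketch of the same check (profile name, file paths and the MINIKUBE_BIN binary are all taken from the table and the log below):
	
	  # Copy a file from the host onto node m04, then read it back to verify the transfer.
	  out/minikube-linux-arm64 -p ha-378189 cp testdata/cp-test.txt ha-378189-m04:/home/docker/cp-test.txt
	  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m04 "sudo cat /home/docker/cp-test.txt"
	
	  # Copy node-to-node and verify on the destination control-plane node.
	  out/minikube-linux-arm64 -p ha-378189 cp ha-378189-m04:/home/docker/cp-test.txt ha-378189:/home/docker/cp-test_ha-378189-m04_ha-378189.txt
	  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189 "sudo cat /home/docker/cp-test_ha-378189-m04_ha-378189.txt"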
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:25:43
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:25:43.361106  594320 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:25:43.361469  594320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:25:43.361502  594320 out.go:374] Setting ErrFile to fd 2...
	I1014 20:25:43.361522  594320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:25:43.361834  594320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:25:43.362250  594320 out.go:368] Setting JSON to false
	I1014 20:25:43.363143  594320 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11294,"bootTime":1760462250,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 20:25:43.363234  594320 start.go:141] virtualization:  
	I1014 20:25:43.366372  594320 out.go:179] * [ha-378189] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 20:25:43.370273  594320 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:25:43.370502  594320 notify.go:220] Checking for updates...
	I1014 20:25:43.376212  594320 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:25:43.379153  594320 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 20:25:43.382007  594320 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 20:25:43.384781  594320 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 20:25:43.387595  594320 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:25:43.391018  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:43.391127  594320 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:25:43.417968  594320 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 20:25:43.418090  594320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:25:43.479361  594320 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-14 20:25:43.470111217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 20:25:43.479472  594320 docker.go:318] overlay module found
	I1014 20:25:43.482541  594320 out.go:179] * Using the docker driver based on existing profile
	I1014 20:25:43.485265  594320 start.go:305] selected driver: docker
	I1014 20:25:43.485282  594320 start.go:925] validating driver "docker" against &{Name:ha-378189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:25:43.485422  594320 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:25:43.485520  594320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:25:43.540613  594320 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-14 20:25:43.531183177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 20:25:43.541069  594320 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:25:43.541095  594320 cni.go:84] Creating CNI manager for ""
	I1014 20:25:43.541149  594320 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1014 20:25:43.541196  594320 start.go:349] cluster config:
	{Name:ha-378189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:25:43.546335  594320 out.go:179] * Starting "ha-378189" primary control-plane node in "ha-378189" cluster
	I1014 20:25:43.549190  594320 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:25:43.552105  594320 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:25:43.554956  594320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:25:43.555008  594320 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 20:25:43.555023  594320 cache.go:58] Caching tarball of preloaded images
	I1014 20:25:43.555058  594320 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:25:43.555118  594320 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 20:25:43.555128  594320 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:25:43.555280  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:43.574403  594320 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:25:43.574427  594320 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:25:43.574448  594320 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:25:43.574472  594320 start.go:360] acquireMachinesLock for ha-378189: {Name:mk72998cded3d94d4d24b35423c9bb42df68dec0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:25:43.574536  594320 start.go:364] duration metric: took 38.794µs to acquireMachinesLock for "ha-378189"
	I1014 20:25:43.574566  594320 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:25:43.574582  594320 fix.go:54] fixHost starting: 
	I1014 20:25:43.574842  594320 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:25:43.592022  594320 fix.go:112] recreateIfNeeded on ha-378189: state=Stopped err=<nil>
	W1014 20:25:43.592054  594320 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:25:43.595275  594320 out.go:252] * Restarting existing docker container for "ha-378189" ...
	I1014 20:25:43.595377  594320 cli_runner.go:164] Run: docker start ha-378189
	I1014 20:25:43.835199  594320 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:25:43.862801  594320 kic.go:430] container "ha-378189" state is running.
	I1014 20:25:43.863545  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189
	I1014 20:25:43.891763  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:43.891990  594320 machine.go:93] provisionDockerMachine start ...
	I1014 20:25:43.892050  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:43.915656  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:43.915978  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33547 <nil> <nil>}
	I1014 20:25:43.915987  594320 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:25:43.916996  594320 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1014 20:25:47.063750  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189
	
	I1014 20:25:47.063771  594320 ubuntu.go:182] provisioning hostname "ha-378189"
	I1014 20:25:47.063837  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:47.081648  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:47.081949  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33547 <nil> <nil>}
	I1014 20:25:47.081959  594320 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-378189 && echo "ha-378189" | sudo tee /etc/hostname
	I1014 20:25:47.237261  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189
	
	I1014 20:25:47.237361  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:47.256023  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:47.256364  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33547 <nil> <nil>}
	I1014 20:25:47.256388  594320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-378189' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-378189/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-378189' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:25:47.404333  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:25:47.404362  594320 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 20:25:47.404398  594320 ubuntu.go:190] setting up certificates
	I1014 20:25:47.404407  594320 provision.go:84] configureAuth start
	I1014 20:25:47.404474  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189
	I1014 20:25:47.421674  594320 provision.go:143] copyHostCerts
	I1014 20:25:47.421717  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:25:47.421746  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 20:25:47.421765  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:25:47.421841  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 20:25:47.421929  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:25:47.421951  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 20:25:47.421956  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:25:47.421989  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 20:25:47.422033  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:25:47.422054  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 20:25:47.422061  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:25:47.422086  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 20:25:47.422140  594320 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.ha-378189 san=[127.0.0.1 192.168.49.2 ha-378189 localhost minikube]
	I1014 20:25:47.717715  594320 provision.go:177] copyRemoteCerts
	I1014 20:25:47.717791  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:25:47.717832  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:47.737857  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:47.839808  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:25:47.839867  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:25:47.856581  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:25:47.856687  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:25:47.873767  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:25:47.873865  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:25:47.890677  594320 provision.go:87] duration metric: took 486.243045ms to configureAuth
	I1014 20:25:47.890712  594320 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:25:47.890978  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:47.891083  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:47.907847  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:47.908141  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33547 <nil> <nil>}
	I1014 20:25:47.908155  594320 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:25:48.237643  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:25:48.237669  594320 machine.go:96] duration metric: took 4.345669995s to provisionDockerMachine
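	The provisioning step just above writes an --insecure-registry flag for the service CIDR into /etc/sysconfig/crio.minikube and restarts CRI-O. A hypothetical way to confirm the result from inside the node (not part of the test run; paths and expected contents taken from the SSH command above):
	
	  # Inside the node (e.g. via minikube ssh): check the sysconfig fragment and the service state.
	  cat /etc/sysconfig/crio.minikube    # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	  sudo systemctl is-active crio       # expect: active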
	I1014 20:25:48.237681  594320 start.go:293] postStartSetup for "ha-378189" (driver="docker")
	I1014 20:25:48.237692  594320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:25:48.237753  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:25:48.237810  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:48.258809  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:48.360095  594320 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:25:48.363412  594320 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:25:48.363442  594320 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:25:48.363453  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 20:25:48.363504  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 20:25:48.363585  594320 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 20:25:48.363596  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /etc/ssl/certs/5469102.pem
	I1014 20:25:48.363699  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:25:48.371847  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:25:48.388783  594320 start.go:296] duration metric: took 151.087012ms for postStartSetup
	I1014 20:25:48.388858  594320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:25:48.388903  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:48.405740  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:48.505335  594320 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:25:48.509958  594320 fix.go:56] duration metric: took 4.935376593s for fixHost
	I1014 20:25:48.509986  594320 start.go:83] releasing machines lock for "ha-378189", held for 4.93543709s
	I1014 20:25:48.510055  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189
	I1014 20:25:48.526206  594320 ssh_runner.go:195] Run: cat /version.json
	I1014 20:25:48.526270  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:48.526531  594320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:25:48.526586  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:25:48.548290  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:48.549796  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:25:48.648126  594320 ssh_runner.go:195] Run: systemctl --version
	I1014 20:25:48.747253  594320 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:25:48.786090  594320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:25:48.790656  594320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:25:48.790738  594320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:25:48.798572  594320 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:25:48.798598  594320 start.go:495] detecting cgroup driver to use...
	I1014 20:25:48.798629  594320 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 20:25:48.798676  594320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:25:48.814126  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:25:48.827342  594320 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:25:48.827404  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:25:48.843057  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:25:48.856133  594320 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:25:48.960636  594320 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:25:49.074665  594320 docker.go:234] disabling docker service ...
	I1014 20:25:49.074816  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:25:49.089288  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:25:49.101798  594320 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:25:49.217149  594320 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:25:49.331613  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:25:49.345157  594320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:25:49.358849  594320 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:25:49.358915  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.367341  594320 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:25:49.367402  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.377099  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.390988  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.399943  594320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:25:49.407841  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.416280  594320 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.424463  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:49.433237  594320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:25:49.440777  594320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:25:49.447969  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:25:49.570863  594320 ssh_runner.go:195] Run: sudo systemctl restart crio
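	The sed edits above pin the pause image, set the cgroupfs cgroup manager with a "pod" conmon cgroup, and open unprivileged low ports before CRI-O is restarted. A hedged spot-check of the resulting keys, using the config path shown in those commands:
	
	  # Keys the preceding sed edits are expected to leave in 02-crio.conf.
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",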
	I1014 20:25:49.701384  594320 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:25:49.701490  594320 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:25:49.705417  594320 start.go:563] Will wait 60s for crictl version
	I1014 20:25:49.705523  594320 ssh_runner.go:195] Run: which crictl
	I1014 20:25:49.708808  594320 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:25:49.731837  594320 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:25:49.731976  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:25:49.758582  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:25:49.794917  594320 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:25:49.797793  594320 cli_runner.go:164] Run: docker network inspect ha-378189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:25:49.814631  594320 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:25:49.818852  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
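	The one-liner above is minikube's idempotent /etc/hosts refresh: drop any stale host.minikube.internal entry, append the current gateway IP, and replace the file via a temporary copy. Expanded here for readability only (same commands, no new step):
	
	  {
	    grep -v $'\thost.minikube.internal$' /etc/hosts   # keep everything except the old entry
	    echo $'192.168.49.1\thost.minikube.internal'      # append the gateway mapping
	  } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts                        # replace the file in one step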
	I1014 20:25:49.829148  594320 kubeadm.go:883] updating cluster {Name:ha-378189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:25:49.829306  594320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:25:49.829365  594320 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:25:49.862281  594320 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:25:49.862307  594320 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:25:49.862361  594320 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:25:49.887278  594320 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:25:49.887305  594320 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:25:49.887314  594320 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:25:49.887406  594320 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-378189 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:25:49.887491  594320 ssh_runner.go:195] Run: crio config
	I1014 20:25:49.950026  594320 cni.go:84] Creating CNI manager for ""
	I1014 20:25:49.950049  594320 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1014 20:25:49.950075  594320 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:25:49.950098  594320 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-378189 NodeName:ha-378189 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:25:49.950235  594320 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-378189"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
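	The rendered kubeadm, kubelet and kube-proxy documents above are later copied to /var/tmp/minikube/kubeadm.yaml.new on the node (see the scp step further down). A hypothetical spot-check of the file once it lands, not part of the test itself:
	
	  # Confirm the CIDRs, CRI socket and control-plane endpoint kubeadm will use.
	  sudo grep -E 'podSubnet|serviceSubnet|criSocket|controlPlaneEndpoint' \
	    /var/tmp/minikube/kubeadm.yaml.new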
	
	I1014 20:25:49.950258  594320 kube-vip.go:115] generating kube-vip config ...
	I1014 20:25:49.950319  594320 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:25:49.962120  594320 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:25:49.962220  594320 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
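	Because the ip_vs modules are unavailable (see the lsmod check above), kube-vip falls back to the ARP-based configuration in this manifest, advertising the HA VIP 192.168.49.254 on eth0. The manifest is written to /etc/kubernetes/manifests/kube-vip.yaml a few steps below; a hedged on-node check once the kubelet has started it (the VIP is only bound on whichever node currently holds the kube-vip lease):
	
	  # Static pod manifest in place, and the VIP bound on this node's interface.
	  sudo grep -E 'image:|name: address' -A1 /etc/kubernetes/manifests/kube-vip.yaml
	  ip addr show eth0 | grep 192.168.49.254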
	I1014 20:25:49.962284  594320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:25:49.970701  594320 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:25:49.970828  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 20:25:49.977995  594320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:25:49.990750  594320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:25:50.005887  594320 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1014 20:25:50.021284  594320 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1014 20:25:50.035924  594320 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:25:50.040106  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:25:50.050560  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:25:50.176106  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:25:50.192829  594320 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189 for IP: 192.168.49.2
	I1014 20:25:50.192902  594320 certs.go:195] generating shared ca certs ...
	I1014 20:25:50.192934  594320 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:50.193147  594320 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 20:25:50.193245  594320 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 20:25:50.193272  594320 certs.go:257] generating profile certs ...
	I1014 20:25:50.193429  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key
	I1014 20:25:50.193492  594320 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.097f6be1
	I1014 20:25:50.193549  594320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt.097f6be1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1014 20:25:51.264129  594320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt.097f6be1 ...
	I1014 20:25:51.264211  594320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt.097f6be1: {Name:mk888fb5869312f9302bb4d116c9798036158a50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:51.264486  594320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.097f6be1 ...
	I1014 20:25:51.264521  594320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.097f6be1: {Name:mk37f1a7286bb5552e328ef092b8b679d52c9b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:51.264668  594320 certs.go:382] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt.097f6be1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt
	I1014 20:25:51.264858  594320 certs.go:386] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.097f6be1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key
	I1014 20:25:51.265044  594320 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key
	I1014 20:25:51.265080  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:25:51.265114  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:25:51.265162  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:25:51.265197  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:25:51.265228  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:25:51.265275  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:25:51.265318  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:25:51.265346  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:25:51.265429  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 20:25:51.265484  594320 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 20:25:51.265514  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 20:25:51.265568  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:25:51.265617  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:25:51.265670  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 20:25:51.265740  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:25:51.265797  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /usr/share/ca-certificates/5469102.pem
	I1014 20:25:51.266224  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:25:51.266279  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem -> /usr/share/ca-certificates/546910.pem
	I1014 20:25:51.266869  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:25:51.293705  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:25:51.326310  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:25:51.350664  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 20:25:51.384349  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 20:25:51.428733  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:25:51.452875  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:25:51.478413  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:25:51.512863  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 20:25:51.549423  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:25:51.576302  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 20:25:51.612732  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:25:51.637495  594320 ssh_runner.go:195] Run: openssl version
	I1014 20:25:51.649332  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 20:25:51.668042  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 20:25:51.674160  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 20:25:51.674241  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 20:25:51.728425  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:25:51.739014  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:25:51.750524  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:25:51.757453  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:25:51.757551  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:25:51.825539  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:25:51.837092  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 20:25:51.848450  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 20:25:51.853823  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 20:25:51.853915  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 20:25:51.903510  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
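Each certificate installed above is followed by an `openssl x509 -hash` run and a symlink under /etc/ssl/certs named after that hash (3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL's default verify path locates a CA. A minimal sketch of the same two steps, assuming a hypothetical CA at /usr/share/ca-certificates/example.pem:

    # Compute the subject-name hash OpenSSL uses for CA lookup in /etc/ssl/certs.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
    # Install the CA under "<hash>.0" so verification can find it without running c_rehash.
    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${HASH}.0"
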
	I1014 20:25:51.914011  594320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:25:51.919570  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:25:51.968390  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:25:52.020296  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:25:52.082590  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:25:52.139873  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:25:52.202457  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
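The series of `-checkend 86400` invocations asks OpenSSL whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would signal that the cert is about to expire and needs regenerating before the restart proceeds. A small sketch of that check, reusing one of the paths from the log:

    # openssl exits 0 if the cert is still valid 24 h from now, 1 otherwise.
    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "cert valid for at least another 24 h"
    else
        echo "cert expires within 24 h; regeneration needed"
    fi
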
	I1014 20:25:52.250943  594320 kubeadm.go:400] StartCluster: {Name:ha-378189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:25:52.251111  594320 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:25:52.251193  594320 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:25:52.305687  594320 cri.go:89] found id: "a140acf9afcdd0e524e513679b658bc7b2dfb9cfc9ef073ace6322510c7f8b59"
	I1014 20:25:52.305737  594320 cri.go:89] found id: "29d51a80b14a0f25f45cfca782f7ec24806bb067ccfbf71ca7754867bd5b0dab"
	I1014 20:25:52.305743  594320 cri.go:89] found id: "6eb0610f607e1c2adb8b4aa2a97c40900841b7ac3d7c573af4ffb1649fa6481b"
	I1014 20:25:52.305748  594320 cri.go:89] found id: "1df4877419cf704774f6c1342d72a03a045a372f39ae449678e06f3d08045f42"
	I1014 20:25:52.305751  594320 cri.go:89] found id: "e43d3be1d91f3e04acb8a0a4ca52d2c72f059654b12a78440dcab6115d7100aa"
	I1014 20:25:52.305756  594320 cri.go:89] found id: ""
	I1014 20:25:52.305830  594320 ssh_runner.go:195] Run: sudo runc list -f json
	W1014 20:25:52.324825  594320 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:25:52Z" level=error msg="open /run/runc: no such file or directory"
	I1014 20:25:52.324939  594320 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:25:52.340043  594320 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 20:25:52.340101  594320 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 20:25:52.340184  594320 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 20:25:52.356489  594320 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:25:52.357142  594320 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-378189" does not appear in /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 20:25:52.357314  594320 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-545047/kubeconfig needs updating (will repair): [kubeconfig missing "ha-378189" cluster setting kubeconfig missing "ha-378189" context setting]
	I1014 20:25:52.357700  594320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:52.358694  594320 kapi.go:59] client config for ha-378189: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key", CAFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:25:52.359335  594320 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 20:25:52.359405  594320 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 20:25:52.359415  594320 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 20:25:52.359420  594320 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 20:25:52.359425  594320 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 20:25:52.359381  594320 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1014 20:25:52.359974  594320 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 20:25:52.376163  594320 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1014 20:25:52.376206  594320 kubeadm.go:601] duration metric: took 36.093342ms to restartPrimaryControlPlane
	I1014 20:25:52.376215  594320 kubeadm.go:402] duration metric: took 125.292882ms to StartCluster
	I1014 20:25:52.376231  594320 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:52.376308  594320 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 20:25:52.377241  594320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:25:52.377523  594320 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:25:52.377556  594320 start.go:241] waiting for startup goroutines ...
	I1014 20:25:52.377564  594320 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:25:52.378287  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:52.384056  594320 out.go:179] * Enabled addons: 
	I1014 20:25:52.386972  594320 addons.go:514] duration metric: took 9.377075ms for enable addons: enabled=[]
	I1014 20:25:52.387045  594320 start.go:246] waiting for cluster config update ...
	I1014 20:25:52.387060  594320 start.go:255] writing updated cluster config ...
	I1014 20:25:52.390584  594320 out.go:203] 
	I1014 20:25:52.393809  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:52.393941  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:52.397423  594320 out.go:179] * Starting "ha-378189-m02" control-plane node in "ha-378189" cluster
	I1014 20:25:52.400301  594320 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:25:52.403231  594320 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:25:52.406009  594320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:25:52.406048  594320 cache.go:58] Caching tarball of preloaded images
	I1014 20:25:52.406156  594320 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 20:25:52.406172  594320 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:25:52.406316  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:52.406581  594320 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:25:52.433409  594320 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:25:52.433441  594320 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:25:52.433455  594320 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:25:52.433497  594320 start.go:360] acquireMachinesLock for ha-378189-m02: {Name:mk92416bb80d0c49acbc35d0990b125674d2e01f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:25:52.433567  594320 start.go:364] duration metric: took 39.312µs to acquireMachinesLock for "ha-378189-m02"
	I1014 20:25:52.433592  594320 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:25:52.433606  594320 fix.go:54] fixHost starting: m02
	I1014 20:25:52.433897  594320 cli_runner.go:164] Run: docker container inspect ha-378189-m02 --format={{.State.Status}}
	I1014 20:25:52.466174  594320 fix.go:112] recreateIfNeeded on ha-378189-m02: state=Stopped err=<nil>
	W1014 20:25:52.466206  594320 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:25:52.469507  594320 out.go:252] * Restarting existing docker container for "ha-378189-m02" ...
	I1014 20:25:52.469612  594320 cli_runner.go:164] Run: docker start ha-378189-m02
	I1014 20:25:52.947382  594320 cli_runner.go:164] Run: docker container inspect ha-378189-m02 --format={{.State.Status}}
	I1014 20:25:52.976894  594320 kic.go:430] container "ha-378189-m02" state is running.
	I1014 20:25:52.977343  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m02
	I1014 20:25:53.015718  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:25:53.015986  594320 machine.go:93] provisionDockerMachine start ...
	I1014 20:25:53.016064  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:53.055604  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:53.055958  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1014 20:25:53.055970  594320 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:25:53.056832  594320 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56096->127.0.0.1:33552: read: connection reset by peer
	I1014 20:25:56.232015  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189-m02
	
	I1014 20:25:56.232096  594320 ubuntu.go:182] provisioning hostname "ha-378189-m02"
	I1014 20:25:56.232187  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:56.268557  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:56.268869  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1014 20:25:56.268879  594320 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-378189-m02 && echo "ha-378189-m02" | sudo tee /etc/hostname
	I1014 20:25:56.465757  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189-m02
	
	I1014 20:25:56.465917  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:56.491913  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:56.492225  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1014 20:25:56.492241  594320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-378189-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-378189-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-378189-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:25:56.672985  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:25:56.673014  594320 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 20:25:56.673042  594320 ubuntu.go:190] setting up certificates
	I1014 20:25:56.673051  594320 provision.go:84] configureAuth start
	I1014 20:25:56.673146  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m02
	I1014 20:25:56.705878  594320 provision.go:143] copyHostCerts
	I1014 20:25:56.705922  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:25:56.705956  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 20:25:56.705963  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:25:56.706040  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 20:25:56.706163  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:25:56.706183  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 20:25:56.706187  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:25:56.706214  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 20:25:56.706263  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:25:56.706278  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 20:25:56.706282  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:25:56.706307  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 20:25:56.706362  594320 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.ha-378189-m02 san=[127.0.0.1 192.168.49.3 ha-378189-m02 localhost minikube]
	I1014 20:25:57.297271  594320 provision.go:177] copyRemoteCerts
	I1014 20:25:57.297355  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:25:57.297408  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:57.315464  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:57.428975  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:25:57.429037  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 20:25:57.452816  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:25:57.452885  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:25:57.471294  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:25:57.471361  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:25:57.489442  594320 provision.go:87] duration metric: took 816.37455ms to configureAuth
	I1014 20:25:57.489480  594320 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:25:57.489743  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:25:57.489867  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:57.514270  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:25:57.514577  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1014 20:25:57.514591  594320 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:25:57.886060  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
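The SSH command above writes a one-line environment file for CRI-O and restarts the service; the `--insecure-registry 10.96.0.0/12` flag lets the runtime pull from in-cluster registries on the service CIDR without TLS. Roughly the same provisioning step as a standalone sketch:

    # Write the drop-in env file CRI-O's unit reads, then restart the runtime.
    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
        | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio
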
	I1014 20:25:57.886080  594320 machine.go:96] duration metric: took 4.870076381s to provisionDockerMachine
	I1014 20:25:57.886091  594320 start.go:293] postStartSetup for "ha-378189-m02" (driver="docker")
	I1014 20:25:57.886102  594320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:25:57.886163  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:25:57.886208  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:57.904127  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:58.005798  594320 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:25:58.011149  594320 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:25:58.011183  594320 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:25:58.011195  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 20:25:58.011261  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 20:25:58.011342  594320 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 20:25:58.011354  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /etc/ssl/certs/5469102.pem
	I1014 20:25:58.011454  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:25:58.019728  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:25:58.038734  594320 start.go:296] duration metric: took 152.626218ms for postStartSetup
	I1014 20:25:58.038827  594320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:25:58.038875  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:58.056476  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:58.157712  594320 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:25:58.162651  594320 fix.go:56] duration metric: took 5.729042528s for fixHost
	I1014 20:25:58.162676  594320 start.go:83] releasing machines lock for "ha-378189-m02", held for 5.729094813s
	I1014 20:25:58.162747  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m02
	I1014 20:25:58.183647  594320 out.go:179] * Found network options:
	I1014 20:25:58.186478  594320 out.go:179]   - NO_PROXY=192.168.49.2
	W1014 20:25:58.189324  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	W1014 20:25:58.189372  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	I1014 20:25:58.189452  594320 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:25:58.189507  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:58.189778  594320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:25:58.189920  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m02
	I1014 20:25:58.209437  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:58.218677  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m02/id_rsa Username:docker}
	I1014 20:25:58.415461  594320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:25:58.420177  594320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:25:58.420296  594320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:25:58.428278  594320 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:25:58.428300  594320 start.go:495] detecting cgroup driver to use...
	I1014 20:25:58.428405  594320 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 20:25:58.428468  594320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:25:58.444844  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:25:58.457892  594320 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:25:58.457983  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:25:58.475762  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:25:58.489635  594320 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:25:58.621200  594320 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:25:58.796095  594320 docker.go:234] disabling docker service ...
	I1014 20:25:58.796209  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:25:58.827751  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:25:58.844957  594320 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:25:59.096618  594320 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:25:59.311841  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:25:59.331768  594320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:25:59.354924  594320 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:25:59.355038  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.367271  594320 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:25:59.367384  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.382002  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.395622  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.410728  594320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:25:59.426726  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.444708  594320 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.458277  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:25:59.469637  594320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:25:59.481336  594320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:25:59.491928  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:25:59.720255  594320 ssh_runner.go:195] Run: sudo systemctl restart crio
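The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, force conmon into the pod cgroup, and allow unprivileged low-port binding, after which crio is restarted. A hedged sketch of what the touched keys in that file would look like afterwards:

    # Inspect the keys the sed edits rewrote (expected values shown as comments).
    grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
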
	I1014 20:25:59.958409  594320 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:25:59.958540  594320 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:25:59.962293  594320 start.go:563] Will wait 60s for crictl version
	I1014 20:25:59.962423  594320 ssh_runner.go:195] Run: which crictl
	I1014 20:25:59.966088  594320 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:25:59.991722  594320 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:25:59.991842  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:26:00.113230  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:26:00.165429  594320 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:26:00.170886  594320 out.go:179]   - env NO_PROXY=192.168.49.2
	I1014 20:26:00.174530  594320 cli_runner.go:164] Run: docker network inspect ha-378189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:26:00.218865  594320 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:26:00.228535  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
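The grep/rewrite pair above keeps /etc/hosts idempotent: check whether host.minikube.internal already maps to the Docker network gateway and, if not, strip any stale entry and append the current one via a temp file. The same pattern spelled out, using the values from this run:

    # Rebuild /etc/hosts without any old host.minikube.internal line, append the
    # fresh mapping, then copy the temp file into place with sudo.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.49.1\thost.minikube.internal\n'
    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts
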
	I1014 20:26:00.249249  594320 mustload.go:65] Loading cluster: ha-378189
	I1014 20:26:00.249528  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:26:00.249874  594320 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:26:00.286648  594320 host.go:66] Checking if "ha-378189" exists ...
	I1014 20:26:00.286974  594320 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189 for IP: 192.168.49.3
	I1014 20:26:00.287300  594320 certs.go:195] generating shared ca certs ...
	I1014 20:26:00.287327  594320 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:26:00.287479  594320 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 20:26:00.287528  594320 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 20:26:00.287537  594320 certs.go:257] generating profile certs ...
	I1014 20:26:00.287629  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key
	I1014 20:26:00.287704  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.31763e79
	I1014 20:26:00.287740  594320 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key
	I1014 20:26:00.287749  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:26:00.287763  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:26:00.287775  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:26:00.287791  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:26:00.287805  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:26:00.287818  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:26:00.287829  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:26:00.287851  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:26:00.287914  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 20:26:00.287952  594320 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 20:26:00.287961  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 20:26:00.287988  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:26:00.288014  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:26:00.288040  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 20:26:00.288093  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:26:00.288127  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem -> /usr/share/ca-certificates/546910.pem
	I1014 20:26:00.288141  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /usr/share/ca-certificates/5469102.pem
	I1014 20:26:00.288152  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:26:00.288213  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:26:00.323762  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:26:00.428739  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1014 20:26:00.433364  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1014 20:26:00.442968  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1014 20:26:00.447387  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1014 20:26:00.457017  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1014 20:26:00.461913  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1014 20:26:00.472168  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1014 20:26:00.476286  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1014 20:26:00.486124  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1014 20:26:00.490540  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1014 20:26:00.499485  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1014 20:26:00.503069  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1014 20:26:00.511944  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:26:00.532142  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:26:00.549917  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:26:00.568202  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 20:26:00.586156  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 20:26:00.603803  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:26:00.621171  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:26:00.638805  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:26:00.656530  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 20:26:00.674792  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 20:26:00.693738  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:26:00.710998  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1014 20:26:00.723268  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1014 20:26:00.736291  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1014 20:26:00.748826  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1014 20:26:00.764190  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1014 20:26:00.786939  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1014 20:26:00.807509  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1014 20:26:00.822235  594320 ssh_runner.go:195] Run: openssl version
	I1014 20:26:00.829716  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 20:26:00.839312  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 20:26:00.844086  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 20:26:00.844196  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 20:26:00.887371  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 20:26:00.895171  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 20:26:00.903441  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 20:26:00.907140  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 20:26:00.907258  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 20:26:00.947880  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:26:00.955732  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:26:00.964076  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:26:00.967857  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:26:00.967921  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:26:01.009173  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:26:01.017414  594320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:26:01.021430  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:26:01.062823  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:26:01.103976  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:26:01.145434  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:26:01.188131  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:26:01.229876  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 20:26:01.271831  594320 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1014 20:26:01.271953  594320 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-378189-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
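The unit drop-in rendered above is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down, so the node's kubelet starts with the --hostname-override and --node-ip shown. A quick, hedged way to confirm what the kubelet on the node actually picked up:

    # Show the drop-in minikube wrote and the flags of the running kubelet process.
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    pgrep -a kubelet
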
	I1014 20:26:01.271983  594320 kube-vip.go:115] generating kube-vip config ...
	I1014 20:26:01.272034  594320 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:26:01.286472  594320 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:26:01.286553  594320 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
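The generated manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp a few lines below), so kubelet runs kube-vip as a static pod advertising the HA virtual IP 192.168.49.254 on eth0; note that control-plane load-balancing was skipped because no ip_vs kernel modules were found. A hedged check that the static pod came up:

    # Confirm the manifest is in place and the kube-vip container is running.
    sudo cat /etc/kubernetes/manifests/kube-vip.yaml
    sudo crictl ps --name kube-vip
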
	I1014 20:26:01.286643  594320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:26:01.295904  594320 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:26:01.296015  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1014 20:26:01.304007  594320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1014 20:26:01.318122  594320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:26:01.331599  594320 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1014 20:26:01.348453  594320 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:26:01.353197  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:26:01.364628  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:26:01.509428  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:26:01.524168  594320 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:26:01.524671  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:26:01.530763  594320 out.go:179] * Verifying Kubernetes components...
	I1014 20:26:01.533842  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:26:01.682729  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:26:01.697778  594320 kapi.go:59] client config for ha-378189: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key", CAFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1014 20:26:01.697855  594320 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1014 20:26:01.698087  594320 node_ready.go:35] waiting up to 6m0s for node "ha-378189-m02" to be "Ready" ...
	W1014 20:26:11.701290  594320 node_ready.go:55] error getting node "ha-378189-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-378189-m02": net/http: TLS handshake timeout
	I1014 20:26:15.977480  594320 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-378189-m02"
	I1014 20:26:19.960495  594320 node_ready.go:49] node "ha-378189-m02" is "Ready"
	I1014 20:26:19.960524  594320 node_ready.go:38] duration metric: took 18.262413251s for node "ha-378189-m02" to be "Ready" ...
	I1014 20:26:19.960537  594320 api_server.go:52] waiting for apiserver process to appear ...
	I1014 20:26:19.960604  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:20.461531  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:20.961464  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:21.461540  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:21.961480  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:22.461610  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:22.961486  594320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:22.977774  594320 api_server.go:72] duration metric: took 21.453514103s to wait for apiserver process to appear ...
	I1014 20:26:22.977840  594320 api_server.go:88] waiting for apiserver healthz status ...
	I1014 20:26:22.977876  594320 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1014 20:26:22.986624  594320 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1014 20:26:22.988140  594320 api_server.go:141] control plane version: v1.34.1
	I1014 20:26:22.988198  594320 api_server.go:131] duration metric: took 10.336359ms to wait for apiserver health ...
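The healthz wait above issues an HTTPS GET against https://192.168.49.2:8443/healthz using the cluster CA and treats a 200 "ok" response as healthy before reading the control-plane version. A self-contained sketch of such a probe, assuming the CA file path is supplied by the caller:

	package example

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	// apiserverHealthy returns nil when GET <endpoint>/healthz answers 200 "ok".
	func apiserverHealthy(endpoint, caFile string) error {
		caPEM, err := os.ReadFile(caFile)
		if err != nil {
			return err
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{RootCAs: pool},
			},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}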
	I1014 20:26:22.988222  594320 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 20:26:23.017914  594320 system_pods.go:59] 26 kube-system pods found
	I1014 20:26:23.018013  594320 system_pods.go:61] "coredns-66bc5c9577-fqrq6" [2464884d-50bb-485c-827e-c58495af789d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:26:23.018041  594320 system_pods.go:61] "coredns-66bc5c9577-rdfdx" [5150fb32-65b6-41fa-b82c-e867a53c8158] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:26:23.018081  594320 system_pods.go:61] "etcd-ha-378189" [cbc1d094-bd19-4f9a-bcd9-873a3cb9b55b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:26:23.018111  594320 system_pods.go:61] "etcd-ha-378189-m02" [594ab6b1-12e7-4159-9682-e9fa603c5d52] Running
	I1014 20:26:23.018134  594320 system_pods.go:61] "etcd-ha-378189-m03" [b76bfaaf-3366-477b-94da-75c707626766] Running
	I1014 20:26:23.018156  594320 system_pods.go:61] "kindnet-54mnm" [564ecf63-fb19-489c-a805-d20f44a94e83] Running
	I1014 20:26:23.018187  594320 system_pods.go:61] "kindnet-k6rbj" [f8b77dee-4c1f-4dec-88f8-5743851b6af0] Running
	I1014 20:26:23.018213  594320 system_pods.go:61] "kindnet-qjxhf" [5ccc29da-500f-4196-a88e-2cf74b77ab1c] Running
	I1014 20:26:23.018237  594320 system_pods.go:61] "kindnet-xs5v9" [fcdfe744-7240-4d2c-887b-3ace1aaf041d] Running
	I1014 20:26:23.018263  594320 system_pods.go:61] "kube-apiserver-ha-378189" [4f4fdbc4-daf1-4540-9355-eec38bcb2686] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:26:23.018296  594320 system_pods.go:61] "kube-apiserver-ha-378189-m02" [8efb89f9-62e4-434c-8cf9-90396bbb1186] Running
	I1014 20:26:23.018325  594320 system_pods.go:61] "kube-apiserver-ha-378189-m03" [241641d1-f1d5-4637-86b3-4bcc9863e83b] Running
	I1014 20:26:23.018353  594320 system_pods.go:61] "kube-controller-manager-ha-378189" [b711c7e1-975a-46b7-b60a-319cf8348e0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:26:23.018378  594320 system_pods.go:61] "kube-controller-manager-ha-378189-m02" [088feae1-1150-4c63-bba2-2d0ee8946f9a] Running
	I1014 20:26:23.018412  594320 system_pods.go:61] "kube-controller-manager-ha-378189-m03" [be5f778e-22e5-4466-a47c-5bc10dd2db5c] Running
	I1014 20:26:23.018440  594320 system_pods.go:61] "kube-proxy-2cc9g" [bf08b638-392a-4fe7-b1d6-f1a954657426] Running
	I1014 20:26:23.018463  594320 system_pods.go:61] "kube-proxy-dkq7w" [bf1cc632-a47e-4253-8cc8-5b38f7bd7add] Running
	I1014 20:26:23.018488  594320 system_pods.go:61] "kube-proxy-qhldf" [1394e466-3ef9-41b3-9fec-0f25b4e85177] Running
	I1014 20:26:23.018518  594320 system_pods.go:61] "kube-proxy-vcfj7" [c29665fb-40d6-4509-be63-bcb379888e1a] Running
	I1014 20:26:23.018546  594320 system_pods.go:61] "kube-scheduler-ha-378189" [843e7bc6-df65-4d7f-ac9a-c64da55abe53] Running
	I1014 20:26:23.018569  594320 system_pods.go:61] "kube-scheduler-ha-378189-m02" [d4c1e5ac-f2ef-4267-bf35-77bae0238d52] Running
	I1014 20:26:23.018590  594320 system_pods.go:61] "kube-scheduler-ha-378189-m03" [d8dfc00b-b2b4-42b7-90cf-7dc44e9e701c] Running
	I1014 20:26:23.018622  594320 system_pods.go:61] "kube-vip-ha-378189" [3a526d2d-ff79-4c9c-9026-adda8cfe01ed] Running
	I1014 20:26:23.018648  594320 system_pods.go:61] "kube-vip-ha-378189-m02" [4671eaeb-fa41-4d07-b180-6230480daa01] Running
	I1014 20:26:23.018672  594320 system_pods.go:61] "kube-vip-ha-378189-m03" [66eac993-126e-415c-bd2a-93d8afded195] Running
	I1014 20:26:23.018696  594320 system_pods.go:61] "storage-provisioner" [6cd440a2-0b51-4f78-8285-48bf107cf288] Running
	I1014 20:26:23.018729  594320 system_pods.go:74] duration metric: took 30.48728ms to wait for pod list to return data ...
	I1014 20:26:23.018756  594320 default_sa.go:34] waiting for default service account to be created ...
	I1014 20:26:23.026171  594320 default_sa.go:45] found service account: "default"
	I1014 20:26:23.026235  594320 default_sa.go:55] duration metric: took 7.458111ms for default service account to be created ...
	I1014 20:26:23.026258  594320 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 20:26:23.039389  594320 system_pods.go:86] 26 kube-system pods found
	I1014 20:26:23.039477  594320 system_pods.go:89] "coredns-66bc5c9577-fqrq6" [2464884d-50bb-485c-827e-c58495af789d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:26:23.039504  594320 system_pods.go:89] "coredns-66bc5c9577-rdfdx" [5150fb32-65b6-41fa-b82c-e867a53c8158] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:26:23.039543  594320 system_pods.go:89] "etcd-ha-378189" [cbc1d094-bd19-4f9a-bcd9-873a3cb9b55b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:26:23.039568  594320 system_pods.go:89] "etcd-ha-378189-m02" [594ab6b1-12e7-4159-9682-e9fa603c5d52] Running
	I1014 20:26:23.039589  594320 system_pods.go:89] "etcd-ha-378189-m03" [b76bfaaf-3366-477b-94da-75c707626766] Running
	I1014 20:26:23.039615  594320 system_pods.go:89] "kindnet-54mnm" [564ecf63-fb19-489c-a805-d20f44a94e83] Running
	I1014 20:26:23.039649  594320 system_pods.go:89] "kindnet-k6rbj" [f8b77dee-4c1f-4dec-88f8-5743851b6af0] Running
	I1014 20:26:23.039674  594320 system_pods.go:89] "kindnet-qjxhf" [5ccc29da-500f-4196-a88e-2cf74b77ab1c] Running
	I1014 20:26:23.039695  594320 system_pods.go:89] "kindnet-xs5v9" [fcdfe744-7240-4d2c-887b-3ace1aaf041d] Running
	I1014 20:26:23.039724  594320 system_pods.go:89] "kube-apiserver-ha-378189" [4f4fdbc4-daf1-4540-9355-eec38bcb2686] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:26:23.039755  594320 system_pods.go:89] "kube-apiserver-ha-378189-m02" [8efb89f9-62e4-434c-8cf9-90396bbb1186] Running
	I1014 20:26:23.039784  594320 system_pods.go:89] "kube-apiserver-ha-378189-m03" [241641d1-f1d5-4637-86b3-4bcc9863e83b] Running
	I1014 20:26:23.039823  594320 system_pods.go:89] "kube-controller-manager-ha-378189" [b711c7e1-975a-46b7-b60a-319cf8348e0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:26:23.039845  594320 system_pods.go:89] "kube-controller-manager-ha-378189-m02" [088feae1-1150-4c63-bba2-2d0ee8946f9a] Running
	I1014 20:26:23.039882  594320 system_pods.go:89] "kube-controller-manager-ha-378189-m03" [be5f778e-22e5-4466-a47c-5bc10dd2db5c] Running
	I1014 20:26:23.039900  594320 system_pods.go:89] "kube-proxy-2cc9g" [bf08b638-392a-4fe7-b1d6-f1a954657426] Running
	I1014 20:26:23.039921  594320 system_pods.go:89] "kube-proxy-dkq7w" [bf1cc632-a47e-4253-8cc8-5b38f7bd7add] Running
	I1014 20:26:23.039955  594320 system_pods.go:89] "kube-proxy-qhldf" [1394e466-3ef9-41b3-9fec-0f25b4e85177] Running
	I1014 20:26:23.039975  594320 system_pods.go:89] "kube-proxy-vcfj7" [c29665fb-40d6-4509-be63-bcb379888e1a] Running
	I1014 20:26:23.039993  594320 system_pods.go:89] "kube-scheduler-ha-378189" [843e7bc6-df65-4d7f-ac9a-c64da55abe53] Running
	I1014 20:26:23.040012  594320 system_pods.go:89] "kube-scheduler-ha-378189-m02" [d4c1e5ac-f2ef-4267-bf35-77bae0238d52] Running
	I1014 20:26:23.040031  594320 system_pods.go:89] "kube-scheduler-ha-378189-m03" [d8dfc00b-b2b4-42b7-90cf-7dc44e9e701c] Running
	I1014 20:26:23.040060  594320 system_pods.go:89] "kube-vip-ha-378189" [3a526d2d-ff79-4c9c-9026-adda8cfe01ed] Running
	I1014 20:26:23.040088  594320 system_pods.go:89] "kube-vip-ha-378189-m02" [4671eaeb-fa41-4d07-b180-6230480daa01] Running
	I1014 20:26:23.040111  594320 system_pods.go:89] "kube-vip-ha-378189-m03" [66eac993-126e-415c-bd2a-93d8afded195] Running
	I1014 20:26:23.040136  594320 system_pods.go:89] "storage-provisioner" [6cd440a2-0b51-4f78-8285-48bf107cf288] Running
	I1014 20:26:23.040170  594320 system_pods.go:126] duration metric: took 13.891803ms to wait for k8s-apps to be running ...
	I1014 20:26:23.040201  594320 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 20:26:23.040293  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:26:23.063946  594320 system_svc.go:56] duration metric: took 23.73711ms WaitForService to wait for kubelet
	I1014 20:26:23.064017  594320 kubeadm.go:586] duration metric: took 21.539761018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:26:23.064051  594320 node_conditions.go:102] verifying NodePressure condition ...
	I1014 20:26:23.082262  594320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:26:23.082345  594320 node_conditions.go:123] node cpu capacity is 2
	I1014 20:26:23.082370  594320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:26:23.082388  594320 node_conditions.go:123] node cpu capacity is 2
	I1014 20:26:23.082423  594320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:26:23.082450  594320 node_conditions.go:123] node cpu capacity is 2
	I1014 20:26:23.082472  594320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 20:26:23.082494  594320 node_conditions.go:123] node cpu capacity is 2
	I1014 20:26:23.082525  594320 node_conditions.go:105] duration metric: took 18.445212ms to run NodePressure ...
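The NodePressure verification reads each node's capacity once per node, logging the same two figures every time: 203034800Ki of ephemeral storage and 2 CPUs. A sketch of retrieving those values with client-go (illustrative only):

	package example

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printNodeCapacity lists every node's ephemeral-storage and CPU capacity,
	// the same two figures the NodePressure check logs above.
	func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
		return nil
	}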
	I1014 20:26:23.082553  594320 start.go:241] waiting for startup goroutines ...
	I1014 20:26:23.082589  594320 start.go:255] writing updated cluster config ...
	I1014 20:26:23.086128  594320 out.go:203] 
	I1014 20:26:23.089286  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:26:23.089471  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:26:23.092927  594320 out.go:179] * Starting "ha-378189-m03" control-plane node in "ha-378189" cluster
	I1014 20:26:23.096535  594320 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:26:23.099560  594320 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:26:23.102321  594320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:26:23.102371  594320 cache.go:58] Caching tarball of preloaded images
	I1014 20:26:23.102397  594320 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:26:23.102501  594320 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 20:26:23.102517  594320 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:26:23.102654  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:26:23.136476  594320 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:26:23.136495  594320 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:26:23.136508  594320 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:26:23.136531  594320 start.go:360] acquireMachinesLock for ha-378189-m03: {Name:mk212b55d7ba2de9e9f1edd93e6a58b4ed6f2ea7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:26:23.136583  594320 start.go:364] duration metric: took 37.113µs to acquireMachinesLock for "ha-378189-m03"
	I1014 20:26:23.136602  594320 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:26:23.136607  594320 fix.go:54] fixHost starting: m03
	I1014 20:26:23.136861  594320 cli_runner.go:164] Run: docker container inspect ha-378189-m03 --format={{.State.Status}}
	I1014 20:26:23.180150  594320 fix.go:112] recreateIfNeeded on ha-378189-m03: state=Stopped err=<nil>
	W1014 20:26:23.180176  594320 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:26:23.183408  594320 out.go:252] * Restarting existing docker container for "ha-378189-m03" ...
	I1014 20:26:23.183496  594320 cli_runner.go:164] Run: docker start ha-378189-m03
	I1014 20:26:23.583316  594320 cli_runner.go:164] Run: docker container inspect ha-378189-m03 --format={{.State.Status}}
	I1014 20:26:23.617146  594320 kic.go:430] container "ha-378189-m03" state is running.
	I1014 20:26:23.617687  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m03
	I1014 20:26:23.654101  594320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/config.json ...
	I1014 20:26:23.654351  594320 machine.go:93] provisionDockerMachine start ...
	I1014 20:26:23.654417  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:23.679159  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:26:23.679663  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1014 20:26:23.679677  594320 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:26:23.680477  594320 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1014 20:26:26.929139  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189-m03
	
	I1014 20:26:26.929205  594320 ubuntu.go:182] provisioning hostname "ha-378189-m03"
	I1014 20:26:26.929306  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:26.953169  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:26:26.953484  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1014 20:26:26.953497  594320 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-378189-m03 && echo "ha-378189-m03" | sudo tee /etc/hostname
	I1014 20:26:27.366006  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-378189-m03
	
	I1014 20:26:27.366194  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:27.395992  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:26:27.396293  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1014 20:26:27.396309  594320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-378189-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-378189-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-378189-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:26:27.693852  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:26:27.693921  594320 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 20:26:27.693958  594320 ubuntu.go:190] setting up certificates
	I1014 20:26:27.694000  594320 provision.go:84] configureAuth start
	I1014 20:26:27.694087  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m03
	I1014 20:26:27.718087  594320 provision.go:143] copyHostCerts
	I1014 20:26:27.718133  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:26:27.718165  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 20:26:27.718172  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 20:26:27.718249  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 20:26:27.718323  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:26:27.718348  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 20:26:27.718353  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 20:26:27.718380  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 20:26:27.718424  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:26:27.718444  594320 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 20:26:27.718452  594320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 20:26:27.718478  594320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 20:26:27.718527  594320 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.ha-378189-m03 san=[127.0.0.1 192.168.49.4 ha-378189-m03 localhost minikube]
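configureAuth above issues a server certificate for the machine whose SAN list covers 127.0.0.1, 192.168.49.4, ha-378189-m03, localhost and minikube, signed by the shared cluster CA. A compact crypto/x509 sketch of issuing a certificate with that shape (the serial handling and the three-year lifetime are assumptions, not values from the log):

	package example

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert issues a server certificate signed by caCert/caKey whose SANs
	// cover the given IPs and DNS names, similar to the provisioning step above.
	func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
		ips []net.IP, dnsNames []string) (der []byte, key *rsa.PrivateKey, err error) {

		key, err = rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()), // illustrative serial
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-378189-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.49.4
			DNSNames:     dnsNames, // e.g. ha-378189-m03, localhost, minikube
		}
		der, err = x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}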
	I1014 20:26:28.885539  594320 provision.go:177] copyRemoteCerts
	I1014 20:26:28.885609  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:26:28.885857  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:28.909377  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:29.092351  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:26:29.092437  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:26:29.150396  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:26:29.150575  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 20:26:29.212427  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:26:29.212488  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:26:29.291296  594320 provision.go:87] duration metric: took 1.597262612s to configureAuth
	I1014 20:26:29.291319  594320 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:26:29.291554  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:26:29.291658  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:29.325450  594320 main.go:141] libmachine: Using SSH client type: native
	I1014 20:26:29.325769  594320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1014 20:26:29.325785  594320 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:26:29.937523  594320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:26:29.937544  594320 machine.go:96] duration metric: took 6.283184743s to provisionDockerMachine
	I1014 20:26:29.937556  594320 start.go:293] postStartSetup for "ha-378189-m03" (driver="docker")
	I1014 20:26:29.937573  594320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:26:29.937654  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:26:29.937717  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:29.965387  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:30.103245  594320 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:26:30.108041  594320 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:26:30.108068  594320 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:26:30.108080  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 20:26:30.108144  594320 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 20:26:30.108221  594320 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 20:26:30.108228  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /etc/ssl/certs/5469102.pem
	I1014 20:26:30.108351  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:26:30.118723  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:26:30.147273  594320 start.go:296] duration metric: took 209.701827ms for postStartSetup
	I1014 20:26:30.147363  594320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:26:30.147417  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:30.167399  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:30.286490  594320 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:26:30.294616  594320 fix.go:56] duration metric: took 7.158001675s for fixHost
	I1014 20:26:30.294649  594320 start.go:83] releasing machines lock for "ha-378189-m03", held for 7.158058317s
	I1014 20:26:30.294720  594320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m03
	I1014 20:26:30.332338  594320 out.go:179] * Found network options:
	I1014 20:26:30.335270  594320 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1014 20:26:30.338218  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	W1014 20:26:30.338252  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	W1014 20:26:30.338276  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	W1014 20:26:30.338286  594320 proxy.go:120] fail to check proxy env: Error ip not in block
	I1014 20:26:30.338361  594320 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:26:30.338399  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:30.338666  594320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:26:30.338727  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:26:30.371445  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:30.378598  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:26:30.685955  594320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:26:30.710071  594320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:26:30.710156  594320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:26:30.762990  594320 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:26:30.763023  594320 start.go:495] detecting cgroup driver to use...
	I1014 20:26:30.763054  594320 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 20:26:30.763113  594320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:26:30.789796  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:26:30.813445  594320 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:26:30.813530  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:26:30.839481  594320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:26:30.861859  594320 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:26:31.219302  594320 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:26:31.622468  594320 docker.go:234] disabling docker service ...
	I1014 20:26:31.622554  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:26:31.669472  594320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:26:31.699352  594320 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:26:32.097354  594320 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:26:32.482606  594320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:26:32.523674  594320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:26:32.577860  594320 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:26:32.577931  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.625402  594320 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:26:32.625482  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.642162  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.655785  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.690980  594320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:26:32.710947  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.738596  594320 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.764228  594320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:26:32.794648  594320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:26:32.819907  594320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:26:32.841183  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:26:33.263701  594320 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:28:03.725730  594320 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.461946041s)
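The block ending here reconfigures CRI-O entirely through in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs manager, conmon_cgroup = "pod", the unprivileged-port sysctl) and then restarts the service, which in this run took just over 90 seconds. A local Go equivalent of the two central substitutions, as a sketch only (the regex patterns mirror the sed expressions in the log; everything else is illustrative):

	package example

	import (
		"os"
		"regexp"
	)

	// patchCrioConf applies the same substitutions the sed commands above perform
	// on /etc/crio/crio.conf.d/02-crio.conf: the pause image and the cgroup manager.
	func patchCrioConf(path, pauseImage, cgroupManager string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllLiteral(data, []byte(`pause_image = "`+pauseImage+`"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllLiteral(out, []byte(`cgroup_manager = "`+cgroupManager+`"`))
		return os.WriteFile(path, out, 0644)
	}

After an edit like this CRI-O still has to be restarted (sudo systemctl restart crio), which is exactly the step whose 1m30s duration is recorded above.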
	I1014 20:28:03.725755  594320 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:28:03.725818  594320 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:28:03.730985  594320 start.go:563] Will wait 60s for crictl version
	I1014 20:28:03.731051  594320 ssh_runner.go:195] Run: which crictl
	I1014 20:28:03.734613  594320 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:28:03.777080  594320 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:28:03.777171  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:28:03.808074  594320 ssh_runner.go:195] Run: crio --version
	I1014 20:28:03.842942  594320 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:28:03.845779  594320 out.go:179]   - env NO_PROXY=192.168.49.2
	I1014 20:28:03.848732  594320 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1014 20:28:03.851762  594320 cli_runner.go:164] Run: docker network inspect ha-378189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:28:03.869115  594320 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:28:03.872847  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:28:03.883220  594320 mustload.go:65] Loading cluster: ha-378189
	I1014 20:28:03.883459  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:28:03.883722  594320 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:28:03.902527  594320 host.go:66] Checking if "ha-378189" exists ...
	I1014 20:28:03.902796  594320 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189 for IP: 192.168.49.4
	I1014 20:28:03.902811  594320 certs.go:195] generating shared ca certs ...
	I1014 20:28:03.902825  594320 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:28:03.902952  594320 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 20:28:03.902998  594320 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 20:28:03.903010  594320 certs.go:257] generating profile certs ...
	I1014 20:28:03.903083  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key
	I1014 20:28:03.903150  594320 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key.02897361
	I1014 20:28:03.903194  594320 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key
	I1014 20:28:03.903292  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:28:03.903318  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:28:03.903337  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:28:03.903356  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:28:03.903373  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:28:03.903388  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:28:03.903405  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:28:03.903422  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:28:03.903477  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 20:28:03.903513  594320 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 20:28:03.903525  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 20:28:03.903550  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:28:03.903577  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:28:03.903603  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 20:28:03.903647  594320 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 20:28:03.903678  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:28:03.903693  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem -> /usr/share/ca-certificates/546910.pem
	I1014 20:28:03.903709  594320 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> /usr/share/ca-certificates/5469102.pem
	I1014 20:28:03.903766  594320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:28:03.921249  594320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:28:04.020701  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1014 20:28:04.026586  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1014 20:28:04.037048  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1014 20:28:04.040988  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1014 20:28:04.050562  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1014 20:28:04.054508  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1014 20:28:04.063039  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1014 20:28:04.066687  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1014 20:28:04.075046  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1014 20:28:04.079077  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1014 20:28:04.087651  594320 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1014 20:28:04.091714  594320 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1014 20:28:04.099957  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:28:04.120044  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:28:04.141157  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:28:04.160012  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 20:28:04.182538  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 20:28:04.206993  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:28:04.231475  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:28:04.253537  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:28:04.275116  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:28:04.294127  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 20:28:04.313010  594320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 20:28:04.337638  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1014 20:28:04.350377  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1014 20:28:04.365979  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1014 20:28:04.380227  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1014 20:28:04.402519  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1014 20:28:04.428480  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1014 20:28:04.444300  594320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1014 20:28:04.465409  594320 ssh_runner.go:195] Run: openssl version
	I1014 20:28:04.472158  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:28:04.481746  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:28:04.485919  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:28:04.486029  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:28:04.534226  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:28:04.542709  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 20:28:04.552210  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 20:28:04.557889  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 20:28:04.558010  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 20:28:04.600600  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 20:28:04.608591  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 20:28:04.617463  594320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 20:28:04.621083  594320 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 20:28:04.621172  594320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 20:28:04.663633  594320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:28:04.671866  594320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:28:04.675585  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:28:04.717923  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:28:04.759364  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:28:04.801264  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:28:04.842553  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:28:04.888303  594320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
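Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate remains valid for at least another 24 hours before it is reused. The same check expressed in Go, as a hedged sketch:

	package example

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring `openssl x509 -checkend` with d = 24h in the log above.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}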
	I1014 20:28:04.930552  594320 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1014 20:28:04.930687  594320 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-378189-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-378189 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:28:04.930727  594320 kube-vip.go:115] generating kube-vip config ...
	I1014 20:28:04.930788  594320 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:28:04.948125  594320 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:28:04.948234  594320 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
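Because `lsmod | grep ip_vs` exited non-zero, kube-vip.go gives up on IPVS control-plane load balancing and renders the ARP/leader-election manifest shown above (vip_arp and vip_leaderelection both "true"), which is then written to /etc/kubernetes/manifests. A small sketch of that probe-and-write decision (function names are illustrative, not from the minikube source):

	package example

	import (
		"os"
		"os/exec"
		"strings"
	)

	// ipvsAvailable mirrors the `lsmod | grep ip_vs` probe: IPVS load balancing
	// is only enabled when the ip_vs kernel module is loaded.
	func ipvsAvailable() bool {
		out, err := exec.Command("lsmod").Output()
		if err != nil {
			return false
		}
		return strings.Contains(string(out), "ip_vs")
	}

	// writeStaticPod drops the rendered kube-vip manifest into the kubelet's
	// static pod directory, as the kube-vip.yaml step above does.
	func writeStaticPod(manifest []byte) error {
		return os.WriteFile("/etc/kubernetes/manifests/kube-vip.yaml", manifest, 0644)
	}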
	I1014 20:28:04.948301  594320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:28:04.956444  594320 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:28:04.956519  594320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1014 20:28:04.964479  594320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1014 20:28:04.980043  594320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:28:04.994905  594320 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1014 20:28:05.014277  594320 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:28:05.019048  594320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:28:05.032269  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:28:05.165397  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:28:05.183614  594320 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:28:05.184053  594320 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:28:05.189131  594320 out.go:179] * Verifying Kubernetes components...
	I1014 20:28:05.192051  594320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:28:05.332393  594320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:28:05.346845  594320 kapi.go:59] client config for ha-378189: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/ha-378189/client.key", CAFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1014 20:28:05.346924  594320 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1014 20:28:05.347150  594320 node_ready.go:35] waiting up to 6m0s for node "ha-378189-m03" to be "Ready" ...
	W1014 20:28:07.351440  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:09.352350  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:11.851687  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:13.851750  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:16.351273  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:18.351732  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:20.354256  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:22.851172  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:24.851281  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:27.351420  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:29.851348  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:32.351357  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:34.851229  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:36.852278  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:39.352122  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:41.851851  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:44.351532  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:46.850950  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:49.352299  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:51.850817  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:53.851323  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:56.350480  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:28:58.351469  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:00.361289  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:02.851458  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:05.350476  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:07.350815  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:09.351365  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:11.851059  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:13.851563  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:16.351339  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:18.850245  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:20.855138  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:23.351487  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:25.852768  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:28.351284  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:30.351537  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:32.851539  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:35.351487  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:37.850912  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:39.851077  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:41.851598  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:44.351457  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:46.351534  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:48.850734  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:50.851231  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:52.851432  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:55.351286  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:57.351688  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:29:59.851092  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:01.852021  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:04.350799  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:06.851328  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:09.351536  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:11.353512  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:13.851897  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:15.852000  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:18.351490  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:20.851616  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:23.351164  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:25.351400  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:27.352193  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:29.850751  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:31.851102  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:34.350486  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:36.351583  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:38.851490  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:41.350742  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:43.357724  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:45.850854  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:47.851206  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:50.350789  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:52.850626  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:54.851276  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:57.350761  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:30:59.351538  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:01.851391  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:04.350763  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:06.850386  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:08.851237  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:11.351648  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:13.850996  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:15.851345  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:18.351324  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:20.355277  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:22.851213  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:25.350735  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:27.850574  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:29.856637  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:32.350493  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:34.351300  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:36.353689  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:38.851719  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:40.852010  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:43.351491  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:45.354360  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:47.851118  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:49.851408  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:52.351137  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:54.851402  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:57.350516  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:31:59.351590  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:01.851121  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:04.350628  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:06.350691  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:08.351228  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:10.851190  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:13.351117  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:15.850576  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:17.851400  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:20.351576  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:22.851057  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:25.351322  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:27.355638  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:29.851145  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:31.851487  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:34.352259  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:36.850522  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:39.351831  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:41.850768  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:44.350717  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:46.351296  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:48.850913  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:50.851023  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:53.350835  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:55.353231  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:32:57.851446  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:00.354501  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:02.850997  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:05.350786  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:07.350850  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:09.351182  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:11.850869  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:13.851809  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:16.351480  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:18.850797  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:20.851760  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:23.351587  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:25.850979  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:28.351624  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:30.851397  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:33.351390  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:35.351573  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:37.850906  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:40.351625  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:42.852099  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:45.352080  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:47.850966  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:49.851115  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:51.851586  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:54.350406  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:56.350741  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:33:58.850717  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:34:00.850850  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	W1014 20:34:02.851136  594320 node_ready.go:57] node "ha-378189-m03" has "Ready":"Unknown" status (will retry)
	I1014 20:34:05.347342  594320 node_ready.go:38] duration metric: took 6m0.000162726s for node "ha-378189-m03" to be "Ready" ...
	I1014 20:34:05.350610  594320 out.go:203] 
	W1014 20:34:05.353525  594320 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1014 20:34:05.353550  594320 out.go:285] * 
	W1014 20:34:05.355696  594320 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:34:05.358382  594320 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 20:26:21 ha-378189 crio[666]: time="2025-10-14T20:26:21.589653552Z" level=info msg="Started container" PID=1152 containerID=b52d43b08a0f0f3d58627b50e98ad91edb2949ba45cac9addb1a701ea81a7ab6 description=default/busybox-7b57f96db7-q27pj/busybox id=3a27ea99-be07-47d1-af55-67ce0d5512da name=/runtime.v1.RuntimeService/StartContainer sandboxID=d6cb5d57424988afbfa1dc5617cd0c429ceb241689cd63972a803390cece10e9
	Oct 14 20:26:52 ha-378189 conmon[1156]: conmon 363f25aac2f0263c8db4 <ninfo>: container 1159 exited with status 1
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.625980521Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5fe72c90-cb4c-4e12-b9ae-04fb5a121866 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.627066339Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fbaeb554-6cf4-4d97-8ba3-e4435db48ee3 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.628061243Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=cdfab445-4fce-47fd-8f74-84d39f5dc931 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.628282768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.635415286Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.635711348Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f62ccfbf6b161947a7cc74fc5dcf3603e4dda48bfb3e96ddbcd70febf8339654/merged/etc/passwd: no such file or directory"
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.63580049Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f62ccfbf6b161947a7cc74fc5dcf3603e4dda48bfb3e96ddbcd70febf8339654/merged/etc/group: no such file or directory"
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.636120757Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.66493445Z" level=info msg="Created container 80de2a13d9d5ba8f0e3d5095bcabb43ad2fd175b927888fd1a8d084399c07313: kube-system/storage-provisioner/storage-provisioner" id=cdfab445-4fce-47fd-8f74-84d39f5dc931 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.666015739Z" level=info msg="Starting container: 80de2a13d9d5ba8f0e3d5095bcabb43ad2fd175b927888fd1a8d084399c07313" id=6b17f4e7-453e-48f0-a667-21e3e6b072ec name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 20:26:52 ha-378189 crio[666]: time="2025-10-14T20:26:52.669895504Z" level=info msg="Started container" PID=1401 containerID=80de2a13d9d5ba8f0e3d5095bcabb43ad2fd175b927888fd1a8d084399c07313 description=kube-system/storage-provisioner/storage-provisioner id=6b17f4e7-453e-48f0-a667-21e3e6b072ec name=/runtime.v1.RuntimeService/StartContainer sandboxID=0baae86d64cea1e60128930d082a0936798e1ae0fa7d9cc62159fa6df0e1584f
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.588845238Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.592539237Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.592598881Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.592618352Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.59592804Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.595974687Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.59599566Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.599034558Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.599072967Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.599095368Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.602072653Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 20:27:01 ha-378189 crio[666]: time="2025-10-14T20:27:01.602106582Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	80de2a13d9d5b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       2                   0baae86d64cea       storage-provisioner                 kube-system
	ada8dd32f25e1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   c1d0c1569b9ee       coredns-66bc5c9577-rdfdx            kube-system
	626c2287b0728       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   7d46cbab6384c       coredns-66bc5c9577-fqrq6            kube-system
	363f25aac2f02       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       1                   0baae86d64cea       storage-provisioner                 kube-system
	1e6277796c71e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   192c9d36b9c8a       kube-proxy-vcfj7                    kube-system
	b52d43b08a0f0       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   d6cb5d5742498       busybox-7b57f96db7-q27pj            default
	19b8d6e7d9d03       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   bfe53b6aba234       kindnet-xs5v9                       kube-system
	d232b32e38df9       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   7 minutes ago       Running             kube-controller-manager   2                   a4b4e27bd77f9       kube-controller-manager-ha-378189   kube-system
	750164802f535       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            2                   0c4960be5b852       kube-apiserver-ha-378189            kube-system
	a140acf9afcdd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Exited              kube-apiserver            1                   0c4960be5b852       kube-apiserver-ha-378189            kube-system
	29d51a80b14a0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   1                   a4b4e27bd77f9       kube-controller-manager-ha-378189   kube-system
	6eb0610f607e1       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   d05d495ee5973       kube-vip-ha-378189                  kube-system
	1df4877419cf7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   0e3b61d4f5605       kube-scheduler-ha-378189            kube-system
	e43d3be1d91f3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   eaed44524556d       etcd-ha-378189                      kube-system
	
	
	==> coredns [626c2287b07287457eac7f6e1b44647d8632ec7bb0a670aa408f969a93c332af] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53276 - 13110 "HINFO IN 6427901219273669792.4527693482397380301. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022504393s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ada8dd32f25e1279de4ffb5acc3114c55c10d41b5d58fa972ded2662fadec5c9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40316 - 29901 "HINFO IN 5531300076892638066.4168205942486472486. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013682922s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-378189
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-378189
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=ha-378189
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T20_20_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 20:19:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-378189
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 20:34:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 20:34:08 +0000   Tue, 14 Oct 2025 20:19:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 20:34:08 +0000   Tue, 14 Oct 2025 20:19:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 20:34:08 +0000   Tue, 14 Oct 2025 20:19:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 20:34:08 +0000   Tue, 14 Oct 2025 20:20:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-378189
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 0978d9c8ec5d4898bfbd451e1fb9d4bc
	  System UUID:                3bbab8a9-98e5-49cd-8237-de9f16fcba7a
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-q27pj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-fqrq6             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-rdfdx             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-378189                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-xs5v9                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-378189             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-378189    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-vcfj7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-378189             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-378189                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m54s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-378189 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-378189 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-378189 status is now: NodeHasSufficientMemory
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-378189 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-378189 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-378189 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                    node-controller  Node ha-378189 event: Registered Node ha-378189 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-378189 event: Registered Node ha-378189 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-378189 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-378189 event: Registered Node ha-378189 in Controller
	  Normal   RegisteredNode           8m57s                  node-controller  Node ha-378189 event: Registered Node ha-378189 in Controller
	  Normal   Starting                 8m28s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m28s (x8 over 8m28s)  kubelet          Node ha-378189 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m28s (x8 over 8m28s)  kubelet          Node ha-378189 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m28s (x8 over 8m28s)  kubelet          Node ha-378189 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m50s                  node-controller  Node ha-378189 event: Registered Node ha-378189 in Controller
	  Normal   RegisteredNode           7m44s                  node-controller  Node ha-378189 event: Registered Node ha-378189 in Controller
	
	
	Name:               ha-378189-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-378189-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=ha-378189
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_14T20_20_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 20:20:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-378189-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 20:34:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 20:33:39 +0000   Tue, 14 Oct 2025 20:25:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 20:33:39 +0000   Tue, 14 Oct 2025 20:25:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 20:33:39 +0000   Tue, 14 Oct 2025 20:25:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 20:33:39 +0000   Tue, 14 Oct 2025 20:25:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-378189-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 efd151379e5c4b2ebd4e80e64c360eed
	  System UUID:                3e1bc125-e145-47d1-8100-88ab5dc3051b
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-tkxmq                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-378189-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-qjxhf                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-378189-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-378189-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-2cc9g                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-378189-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-378189-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 7m41s                  kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-378189-m02 event: Registered Node ha-378189-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-378189-m02 event: Registered Node ha-378189-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-378189-m02 event: Registered Node ha-378189-m02 in Controller
	  Normal   NodeHasSufficientPID     9m36s (x8 over 9m36s)  kubelet          Node ha-378189-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    9m36s (x8 over 9m36s)  kubelet          Node ha-378189-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 9m36s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m36s (x8 over 9m36s)  kubelet          Node ha-378189-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeNotReady             9m2s                   node-controller  Node ha-378189-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           8m58s                  node-controller  Node ha-378189-m02 event: Registered Node ha-378189-m02 in Controller
	  Normal   Starting                 8m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m25s (x8 over 8m25s)  kubelet          Node ha-378189-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m25s (x8 over 8m25s)  kubelet          Node ha-378189-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m25s (x8 over 8m25s)  kubelet          Node ha-378189-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m51s                  node-controller  Node ha-378189-m02 event: Registered Node ha-378189-m02 in Controller
	  Normal   RegisteredNode           7m45s                  node-controller  Node ha-378189-m02 event: Registered Node ha-378189-m02 in Controller
	
	
	Name:               ha-378189-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-378189-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=ha-378189
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_14T20_23_23_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 20:23:23 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-378189-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 20:25:15 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 14 Oct 2025 20:24:54 +0000   Tue, 14 Oct 2025 20:27:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 14 Oct 2025 20:24:54 +0000   Tue, 14 Oct 2025 20:27:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 14 Oct 2025 20:24:54 +0000   Tue, 14 Oct 2025 20:27:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 14 Oct 2025 20:24:54 +0000   Tue, 14 Oct 2025 20:27:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-378189-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d1e8ea2da5b40c08e7704d42f98e14c
	  System UUID:                a2a231a7-55aa-4186-9e9a-7dad075b6ff1
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-54mnm       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-dkq7w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-378189-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-378189-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-378189-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node ha-378189-m04 event: Registered Node ha-378189-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-378189-m04 event: Registered Node ha-378189-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-378189-m04 event: Registered Node ha-378189-m04 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-378189-m04 status is now: NodeReady
	  Normal  RegisteredNode           8m58s              node-controller  Node ha-378189-m04 event: Registered Node ha-378189-m04 in Controller
	  Normal  RegisteredNode           7m51s              node-controller  Node ha-378189-m04 event: Registered Node ha-378189-m04 in Controller
	  Normal  RegisteredNode           7m45s              node-controller  Node ha-378189-m04 event: Registered Node ha-378189-m04 in Controller
	  Normal  NodeNotReady             7m1s               node-controller  Node ha-378189-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[ +28.158583] overlayfs: idmapped layers are currently not supported
	[Oct14 19:45] overlayfs: idmapped layers are currently not supported
	[Oct14 19:46] overlayfs: idmapped layers are currently not supported
	[Oct14 19:47] overlayfs: idmapped layers are currently not supported
	[ +31.505487] overlayfs: idmapped layers are currently not supported
	[Oct14 19:48] overlayfs: idmapped layers are currently not supported
	[ +35.381153] overlayfs: idmapped layers are currently not supported
	[ +14.062769] overlayfs: idmapped layers are currently not supported
	[Oct14 19:50] overlayfs: idmapped layers are currently not supported
	[Oct14 19:51] overlayfs: idmapped layers are currently not supported
	[Oct14 19:52] overlayfs: idmapped layers are currently not supported
	[  +0.143459] overlayfs: idmapped layers are currently not supported
	[Oct14 19:54] overlayfs: idmapped layers are currently not supported
	[Oct14 19:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct14 19:59] overlayfs: idmapped layers are currently not supported
	[Oct14 20:05] overlayfs: idmapped layers are currently not supported
	[Oct14 20:06] overlayfs: idmapped layers are currently not supported
	[Oct14 20:19] overlayfs: idmapped layers are currently not supported
	[Oct14 20:20] overlayfs: idmapped layers are currently not supported
	[Oct14 20:21] overlayfs: idmapped layers are currently not supported
	[Oct14 20:23] overlayfs: idmapped layers are currently not supported
	[Oct14 20:24] overlayfs: idmapped layers are currently not supported
	[Oct14 20:25] overlayfs: idmapped layers are currently not supported
	[  +4.162636] overlayfs: idmapped layers are currently not supported
	[Oct14 20:26] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e43d3be1d91f3e04acb8a0a4ca52d2c72f059654b12a78440dcab6115d7100aa] <==
	{"level":"info","ts":"2025-10-14T20:28:06.190047Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"77cd6ee1d0a2b5a3","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-10-14T20:28:06.190169Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"info","ts":"2025-10-14T20:28:06.222402Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"info","ts":"2025-10-14T20:28:06.223226Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"warn","ts":"2025-10-14T20:28:06.604394Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77cd6ee1d0a2b5a3","rtt":"310.017358ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-14T20:28:06.604496Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77cd6ee1d0a2b5a3","rtt":"309.548378ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-10-14T20:34:09.523117Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12520427846644105758 12593026477526642892)"}
	{"level":"info","ts":"2025-10-14T20:34:09.525140Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"77cd6ee1d0a2b5a3","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-10-14T20:34:09.525190Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"warn","ts":"2025-10-14T20:34:09.525251Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"info","ts":"2025-10-14T20:34:09.525273Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"warn","ts":"2025-10-14T20:34:09.525255Z","caller":"etcdserver/server.go:718","msg":"rejected Raft message from removed member","local-member-id":"aec36adc501070cc","removed-member-id":"77cd6ee1d0a2b5a3"}
	{"level":"warn","ts":"2025-10-14T20:34:09.525373Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2025-10-14T20:34:09.525440Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"info","ts":"2025-10-14T20:34:09.525463Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"info","ts":"2025-10-14T20:34:09.525547Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"warn","ts":"2025-10-14T20:34:09.525655Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3","error":"context canceled"}
	{"level":"warn","ts":"2025-10-14T20:34:09.525720Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"77cd6ee1d0a2b5a3","error":"failed to read 77cd6ee1d0a2b5a3 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-10-14T20:34:09.525738Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"warn","ts":"2025-10-14T20:34:09.525960Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3","error":"context canceled"}
	{"level":"info","ts":"2025-10-14T20:34:09.526001Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"info","ts":"2025-10-14T20:34:09.526013Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"info","ts":"2025-10-14T20:34:09.526023Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"77cd6ee1d0a2b5a3"}
	{"level":"warn","ts":"2025-10-14T20:34:09.573763Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"77cd6ee1d0a2b5a3"}
	{"level":"warn","ts":"2025-10-14T20:34:09.582489Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"77cd6ee1d0a2b5a3"}
	
	
	==> kernel <==
	 20:34:19 up  3:16,  0 user,  load average: 2.73, 1.75, 1.69
	Linux ha-378189 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19b8d6e7d9d0342c76a1e404b13de2161ab3c4bf2f293b51851e009a1b5f7f08] <==
	I1014 20:33:41.582564       1 main.go:324] Node ha-378189-m03 has CIDR [10.244.2.0/24] 
	I1014 20:33:41.582642       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1014 20:33:41.582652       1 main.go:324] Node ha-378189-m04 has CIDR [10.244.3.0/24] 
	I1014 20:33:51.588504       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:33:51.588651       1 main.go:301] handling current node
	I1014 20:33:51.588720       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1014 20:33:51.588753       1 main.go:324] Node ha-378189-m02 has CIDR [10.244.1.0/24] 
	I1014 20:33:51.588972       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1014 20:33:51.589013       1 main.go:324] Node ha-378189-m03 has CIDR [10.244.2.0/24] 
	I1014 20:33:51.589196       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1014 20:33:51.589242       1 main.go:324] Node ha-378189-m04 has CIDR [10.244.3.0/24] 
	I1014 20:34:01.588534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:34:01.588567       1 main.go:301] handling current node
	I1014 20:34:01.588586       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1014 20:34:01.588593       1 main.go:324] Node ha-378189-m02 has CIDR [10.244.1.0/24] 
	I1014 20:34:01.588751       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1014 20:34:01.588779       1 main.go:324] Node ha-378189-m03 has CIDR [10.244.2.0/24] 
	I1014 20:34:01.588872       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1014 20:34:01.588899       1 main.go:324] Node ha-378189-m04 has CIDR [10.244.3.0/24] 
	I1014 20:34:11.582458       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1014 20:34:11.582574       1 main.go:301] handling current node
	I1014 20:34:11.582632       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1014 20:34:11.582647       1 main.go:324] Node ha-378189-m02 has CIDR [10.244.1.0/24] 
	I1014 20:34:11.582798       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1014 20:34:11.582813       1 main.go:324] Node ha-378189-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [750164802f535723e47d9b886929b01a0d9c918009411e14769f1b1bd030b93a] <==
	I1014 20:26:20.042627       1 policy_source.go:240] refreshing policies
	I1014 20:26:20.079160       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1014 20:26:20.093566       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1014 20:26:20.093627       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1014 20:26:20.112990       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 20:26:20.113438       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 20:26:20.113624       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 20:26:20.119480       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 20:26:20.119521       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 20:26:20.119613       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1014 20:26:20.119654       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1014 20:26:20.121370       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1014 20:26:20.126606       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1014 20:26:20.130342       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 20:26:20.136886       1 cache.go:39] Caches are synced for autoregister controller
	W1014 20:26:20.207177       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1014 20:26:20.213176       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 20:26:20.235225       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1014 20:26:20.242018       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1014 20:26:20.523492       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 20:26:20.826556       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1014 20:26:22.469181       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1014 20:26:28.935001       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1014 20:26:29.018133       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1014 20:26:36.974045       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-apiserver [a140acf9afcdd0e524e513679b658bc7b2dfb9cfc9ef073ace6322510c7f8b59] <==
	I1014 20:25:51.233702       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1014 20:25:54.896784       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1014 20:25:54.896886       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1014 20:25:54.896942       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1014 20:25:54.896973       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1014 20:25:54.897001       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1014 20:25:54.897033       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1014 20:25:54.897063       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1014 20:25:54.897093       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1014 20:25:54.897121       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1014 20:25:54.897149       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1014 20:25:54.897178       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1014 20:25:54.897206       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1014 20:25:54.922112       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1014 20:25:54.935912       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	W1014 20:25:54.937765       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1014 20:25:54.962374       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1014 20:25:54.968742       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1014 20:25:54.969218       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1014 20:25:54.969513       1 instance.go:239] Using reconciler: lease
	W1014 20:25:54.971617       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1014 20:26:14.921833       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1014 20:26:14.925037       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1014 20:26:14.970811       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1014 20:26:14.970821       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [29d51a80b14a0f25f45cfca782f7ec24806bb067ccfbf71ca7754867bd5b0dab] <==
	I1014 20:25:54.406785       1 serving.go:386] Generated self-signed cert in-memory
	I1014 20:25:55.132646       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1014 20:25:55.132731       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:25:55.147018       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1014 20:25:55.147823       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 20:25:55.148017       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 20:25:55.148140       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1014 20:26:20.022395       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [d232b32e38df9f91bd238189f93a91f9a75b3d07dfb77d79c8de6932a7885216] <==
	I1014 20:26:28.618601       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 20:26:28.618752       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-378189-m02"
	I1014 20:26:28.618829       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-378189-m03"
	I1014 20:26:28.618890       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-378189-m04"
	I1014 20:26:28.618954       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-378189"
	I1014 20:26:28.604585       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 20:26:28.619106       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1014 20:26:28.602874       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1014 20:26:28.629252       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1014 20:26:28.602883       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1014 20:26:28.630235       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1014 20:26:28.602893       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1014 20:26:28.630543       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1014 20:26:28.603399       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1014 20:26:28.556415       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1014 20:26:28.637223       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1014 20:26:28.682627       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 20:26:28.682751       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1014 20:26:28.741655       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 20:26:28.753923       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 20:26:28.754530       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1014 20:26:28.754582       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 20:26:28.808506       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	E1014 20:27:02.690518       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1014 20:32:20.021441       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-n6bnr"
	
	
	==> kube-proxy [1e6277796c71ea2156b2442bc5bccf388b38d67c931d7ebfd1af22d8148d7f4a] <==
	I1014 20:26:22.368598       1 server_linux.go:53] "Using iptables proxy"
	I1014 20:26:23.150050       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 20:26:23.250265       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 20:26:23.250370       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1014 20:26:23.250482       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 20:26:24.166639       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 20:26:24.166712       1 server_linux.go:132] "Using iptables Proxier"
	I1014 20:26:24.319706       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 20:26:24.320064       1 server.go:527] "Version info" version="v1.34.1"
	I1014 20:26:24.320240       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:26:24.338325       1 config.go:200] "Starting service config controller"
	I1014 20:26:24.338693       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 20:26:24.338764       1 config.go:106] "Starting endpoint slice config controller"
	I1014 20:26:24.338796       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 20:26:24.338833       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 20:26:24.338859       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 20:26:24.341262       1 config.go:309] "Starting node config controller"
	I1014 20:26:24.342084       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 20:26:24.342169       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 20:26:24.440637       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 20:26:24.440752       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 20:26:24.440814       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1df4877419cf704774f6c1342d72a03a045a372f39ae449678e06f3d08045f42] <==
	I1014 20:25:54.629182       1 serving.go:386] Generated self-signed cert in-memory
	W1014 20:26:05.981471       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W1014 20:26:05.981507       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 20:26:05.981516       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 20:26:19.941274       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 20:26:19.947786       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:26:19.950118       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:26:19.952381       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:26:19.954023       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 20:26:19.955708       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 20:26:20.152543       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 14 20:26:20 ha-378189 kubelet[795]: E1014 20:26:20.242979     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-378189\" already exists" pod="kube-system/kube-apiserver-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.243174     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: E1014 20:26:20.266372     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-378189\" already exists" pod="kube-system/kube-controller-manager-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.266545     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: E1014 20:26:20.277022     795 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-378189\" already exists" pod="kube-system/kube-scheduler-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.335489     795 apiserver.go:52] "Watching apiserver"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.341320     795 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-378189" podUID="58e47d49-6119-454a-8ef7-5ff6d46dbc8e"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.377684     795 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.377713     795 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: E1014 20:26:20.380550     795 status_manager.go:1041] "Failed to update status for pod" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58e47d49-6119-454a-8ef7-5ff6d46dbc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T20:25:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T20:25:50Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T20:25:51Z\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T20:25:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T20:25:50Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://6eb0610f607e1c2adb8b4aa2a97c40900841b7ac3d7c573af4ffb1649fa6481b\\\",\\\"image\\\":\\\"ghcr.io/kube-vip/kube-vip:v1.0.1\\\",\\\"imageID\\\":\\\"ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-vip\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T20:25:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/admin.conf\\\",\\\"name\\\":\\\"kubeconfig\\\"}]}],\\\"startTime\\\":\\\"2025-10-14T20:25:50Z\\\"}}\" for pod \"kube-system\"/\"kube-vip-ha-378189\": pods \"kube-vip-ha-378189\" not found" pod="kube-system/kube-vip-ha-378189"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.384848     795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a6eb6d052fb3fa4b9cbf90736fdb46e" path="/var/lib/kubelet/pods/5a6eb6d052fb3fa4b9cbf90736fdb46e/volumes"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.439465     795 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.474075     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fcdfe744-7240-4d2c-887b-3ace1aaf041d-cni-cfg\") pod \"kindnet-xs5v9\" (UID: \"fcdfe744-7240-4d2c-887b-3ace1aaf041d\") " pod="kube-system/kindnet-xs5v9"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.474378     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fcdfe744-7240-4d2c-887b-3ace1aaf041d-xtables-lock\") pod \"kindnet-xs5v9\" (UID: \"fcdfe744-7240-4d2c-887b-3ace1aaf041d\") " pod="kube-system/kindnet-xs5v9"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.474711     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcdfe744-7240-4d2c-887b-3ace1aaf041d-lib-modules\") pod \"kindnet-xs5v9\" (UID: \"fcdfe744-7240-4d2c-887b-3ace1aaf041d\") " pod="kube-system/kindnet-xs5v9"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.474808     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c29665fb-40d6-4509-be63-bcb379888e1a-lib-modules\") pod \"kube-proxy-vcfj7\" (UID: \"c29665fb-40d6-4509-be63-bcb379888e1a\") " pod="kube-system/kube-proxy-vcfj7"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.474885     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6cd440a2-0b51-4f78-8285-48bf107cf288-tmp\") pod \"storage-provisioner\" (UID: \"6cd440a2-0b51-4f78-8285-48bf107cf288\") " pod="kube-system/storage-provisioner"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.474996     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c29665fb-40d6-4509-be63-bcb379888e1a-xtables-lock\") pod \"kube-proxy-vcfj7\" (UID: \"c29665fb-40d6-4509-be63-bcb379888e1a\") " pod="kube-system/kube-proxy-vcfj7"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.489643     795 scope.go:117] "RemoveContainer" containerID="29d51a80b14a0f25f45cfca782f7ec24806bb067ccfbf71ca7754867bd5b0dab"
	Oct 14 20:26:20 ha-378189 kubelet[795]: I1014 20:26:20.579639     795 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 14 20:26:21 ha-378189 kubelet[795]: W1014 20:26:20.998774     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/crio-c1d0c1569b9ee9cc4effe5674d31a059d49f0e7345bbd9f839e16f3b2890ffd0 WatchSource:0}: Error finding container c1d0c1569b9ee9cc4effe5674d31a059d49f0e7345bbd9f839e16f3b2890ffd0: Status 404 returned error can't find the container with id c1d0c1569b9ee9cc4effe5674d31a059d49f0e7345bbd9f839e16f3b2890ffd0
	Oct 14 20:26:21 ha-378189 kubelet[795]: I1014 20:26:21.651722     795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-378189" podStartSLOduration=1.6517024089999999 podStartE2EDuration="1.651702409s" podCreationTimestamp="2025-10-14 20:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 20:26:21.43011465 +0000 UTC m=+31.232674378" watchObservedRunningTime="2025-10-14 20:26:21.651702409 +0000 UTC m=+31.454262129"
	Oct 14 20:26:50 ha-378189 kubelet[795]: E1014 20:26:50.327193     795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ed75cd94e8fcbf8b6d07b018e25b54958fa14a2404e6404aa955941052c61cc\": container with ID starting with 9ed75cd94e8fcbf8b6d07b018e25b54958fa14a2404e6404aa955941052c61cc not found: ID does not exist" containerID="9ed75cd94e8fcbf8b6d07b018e25b54958fa14a2404e6404aa955941052c61cc"
	Oct 14 20:26:50 ha-378189 kubelet[795]: I1014 20:26:50.327265     795 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="9ed75cd94e8fcbf8b6d07b018e25b54958fa14a2404e6404aa955941052c61cc" err="rpc error: code = NotFound desc = could not find container \"9ed75cd94e8fcbf8b6d07b018e25b54958fa14a2404e6404aa955941052c61cc\": container with ID starting with 9ed75cd94e8fcbf8b6d07b018e25b54958fa14a2404e6404aa955941052c61cc not found: ID does not exist"
	Oct 14 20:26:52 ha-378189 kubelet[795]: I1014 20:26:52.625158     795 scope.go:117] "RemoveContainer" containerID="363f25aac2f0263c8db462f13cef072fd6f75d27229590360981202746ac71ab"
	

                                                
                                                
-- /stdout --
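The etcd log above records member 77cd6ee1d0a2b5a3 (https://192.168.49.4:2380, the deleted secondary control plane) being removed from the raft configuration at 20:34:09, after which its streams and messages are rejected as coming from a removed member. A hedged way to confirm the surviving membership from the remaining control plane; the pod name etcd-ha-378189 and the /var/lib/minikube/certs/etcd paths are assumptions based on minikube's usual layout (the same certs directory appears in the kube-controller-manager log above):

  kubectl --context ha-378189 -n kube-system exec etcd-ha-378189 -- etcdctl \
    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
    --cert=/var/lib/minikube/certs/etcd/server.crt \
    --key=/var/lib/minikube/certs/etcd/server.key \
    --endpoints=https://127.0.0.1:2379 member list -w table
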
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-378189 -n ha-378189
helpers_test.go:269: (dbg) Run:  kubectl --context ha-378189 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-q7g99
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-378189 describe pod busybox-7b57f96db7-q7g99
helpers_test.go:290: (dbg) kubectl --context ha-378189 describe pod busybox-7b57f96db7-q7g99:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-q7g99
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mll6g (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-mll6g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  2m    default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  2m    default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  11s   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  11s   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.23s)
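The FailedScheduling events above explain the degraded state directly: after the secondary control plane is deleted, the busybox replica has nowhere to go, because two nodes already run a replica (pod anti-affinity) and the remaining nodes are either unreachable-tainted or marked unschedulable. A short diagnostic sketch against the same ha-378189 context (commands only, not part of the test):

  # Which nodes are tainted or cordoned after the node deletion
  kubectl --context ha-378189 get nodes -o wide
  kubectl --context ha-378189 describe nodes | grep -E '^Name:|^Taints:|Unschedulable'
  # Where the other busybox replicas landed (anti-affinity blocks co-scheduling)
  kubectl --context ha-378189 get pods -l app=busybox -o wide
  # Full event history for the stuck replica
  kubectl --context ha-378189 describe pod busybox-7b57f96db7-q7g99
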

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.44s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-897764 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-897764 --output=json --user=testUser: exit status 80 (2.43932146s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f1351897-c97b-4eca-a67f-d6d60d06066d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-897764 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"af849cb0-b180-4766-a661-4265b46954a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-14T20:38:59Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"21528d0e-4e02-4b1d-aea3-53618879cb59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-897764 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.44s)
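The failure itself is GUEST_PAUSE: minikube shells into the node and runs `sudo runc list -f json` to enumerate containers, but /run/runc does not exist on this crio node, so every attempt fails the same way. A hedged reproduction sketch, assuming the json-output-897764 profile is still running; the grep pattern and jq filter are illustrative, not taken from the test:

  # Reproduce the failing check on the node
  out/minikube-linux-arm64 -p json-output-897764 ssh -- 'sudo runc list -f json; ls -ld /run/runc'
  # Check which runtime and state directory crio is actually configured with
  out/minikube-linux-arm64 -p json-output-897764 ssh -- 'sudo grep -R -E "runtime_path|runtime_root|default_runtime" /etc/crio/ 2>/dev/null'
  # Pull only the error event out of the CloudEvents JSON stream
  out/minikube-linux-arm64 pause -p json-output-897764 --output=json --user=testUser \
    | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | .data.message'
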

                                                
                                    
x
+
TestJSONOutput/unpause/Command (2.11s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-897764 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-897764 --output=json --user=testUser: exit status 80 (2.10833933s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"40683b7d-2796-4a01-ac4a-37e6bb8d1d10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-897764 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"7209de71-ead3-4a96-b099-3bd9653a156b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-14T20:39:01Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"13c2c7ca-f083-4ff5-9a16-69d2ad71776c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-897764 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.11s)
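The unpause failure is the same /run/runc problem surfacing one step later: GUEST_UNPAUSE needs to "list paused" containers via runc before it can unpause them. The CRI-level listing does work on these nodes (the TestPause trace below shows crictl returning container IDs), so a hedged cross-check on the node could contrast the two paths:

  # runc state directory is missing, but the CRI socket still answers
  out/minikube-linux-arm64 -p json-output-897764 ssh -- \
    'sudo runc list -f json || true; sudo crictl ps --state running --quiet | head'
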

                                                
                                    
x
+
TestPause/serial/Pause (7.26s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-361043 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-361043 --alsologtostderr -v=5: exit status 80 (2.435355066s)

                                                
                                                
-- stdout --
	* Pausing node pause-361043 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 21:03:02.913408  708502 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:03:02.914031  708502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:03:02.914046  708502 out.go:374] Setting ErrFile to fd 2...
	I1014 21:03:02.914052  708502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:03:02.914327  708502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:03:02.914596  708502 out.go:368] Setting JSON to false
	I1014 21:03:02.914622  708502 mustload.go:65] Loading cluster: pause-361043
	I1014 21:03:02.915104  708502 config.go:182] Loaded profile config "pause-361043": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:03:02.915563  708502 cli_runner.go:164] Run: docker container inspect pause-361043 --format={{.State.Status}}
	I1014 21:03:02.932683  708502 host.go:66] Checking if "pause-361043" exists ...
	I1014 21:03:02.932997  708502 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:03:02.999094  708502 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-14 21:03:02.988604284 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:03:02.999825  708502 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-361043 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1014 21:03:03.003462  708502 out.go:179] * Pausing node pause-361043 ... 
	I1014 21:03:03.007494  708502 host.go:66] Checking if "pause-361043" exists ...
	I1014 21:03:03.007888  708502 ssh_runner.go:195] Run: systemctl --version
	I1014 21:03:03.007948  708502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:03:03.026257  708502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33757 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/pause-361043/id_rsa Username:docker}
	I1014 21:03:03.131286  708502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:03:03.144076  708502 pause.go:52] kubelet running: true
	I1014 21:03:03.144198  708502 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:03:03.351566  708502 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:03:03.351659  708502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:03:03.424357  708502 cri.go:89] found id: "a6a670e4b96d401ca63818f2706aa6e56c947a842bddbb42558d305cb8007f32"
	I1014 21:03:03.424423  708502 cri.go:89] found id: "fbcaf7a84e4396d138d2b4f56efa1373b3bcedfc203b447d251e52b1052cba5c"
	I1014 21:03:03.424444  708502 cri.go:89] found id: "23cf2e7d8049384cf8d3943f02d514c3e391d5ad704b75431865b05d3d029f22"
	I1014 21:03:03.424464  708502 cri.go:89] found id: "660ef0bec4428d1153fa555e4a0f7daa6380e392888604d04e0e4322d7771003"
	I1014 21:03:03.424499  708502 cri.go:89] found id: "ec1e6454c986ce31b077f25fc95ab32526d877c36bcb22d9c7e025f1cce4a245"
	I1014 21:03:03.424523  708502 cri.go:89] found id: "0656608153f0e568502f545a2d6fcbc28356167d353ce4ef22b6e0c9db00be42"
	I1014 21:03:03.424540  708502 cri.go:89] found id: "0b6f7661d8beda525ec254094f39f88e7c53c0c15a7ddaad9ae3d5d15de7ab78"
	I1014 21:03:03.424560  708502 cri.go:89] found id: "29d1f875e07fb368ad19aeba36a9b146d4a9b45accd812d3388a16b14c5199c5"
	I1014 21:03:03.424580  708502 cri.go:89] found id: "fe32c1a3bade0fcd3521bb4bfa21cb5373816ddea996c29b888604e2792f7df8"
	I1014 21:03:03.424619  708502 cri.go:89] found id: "aa095dcde47c4e0f4ed9486946661610675ccee537498ca0707b769339e8e582"
	I1014 21:03:03.424644  708502 cri.go:89] found id: "5f45c9528a708bad4f13dd1f5f8d17082f4445a4bfd9e9b4dd4679184e33b4c7"
	I1014 21:03:03.424664  708502 cri.go:89] found id: "055ff8e53d7f5e81c45b6752bfdfaa25d49b2274d3c55e4ff21315a5cc1b8042"
	I1014 21:03:03.424684  708502 cri.go:89] found id: "a49c5d87c8b814e895398783eb4a4f8388fdc2356900055aae236ddce0b78dfa"
	I1014 21:03:03.424705  708502 cri.go:89] found id: "8d254f086a1c65340f4c5ac0cbb7b76bbedcd67dd47f333308d7d82abfd769b6"
	I1014 21:03:03.424732  708502 cri.go:89] found id: ""
	I1014 21:03:03.424811  708502 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:03:03.435530  708502 retry.go:31] will retry after 279.834759ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:03:03Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:03:03.716023  708502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:03:03.732605  708502 pause.go:52] kubelet running: false
	I1014 21:03:03.732721  708502 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:03:03.917441  708502 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:03:03.917529  708502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:03:03.990856  708502 cri.go:89] found id: "a6a670e4b96d401ca63818f2706aa6e56c947a842bddbb42558d305cb8007f32"
	I1014 21:03:03.990876  708502 cri.go:89] found id: "fbcaf7a84e4396d138d2b4f56efa1373b3bcedfc203b447d251e52b1052cba5c"
	I1014 21:03:03.990882  708502 cri.go:89] found id: "23cf2e7d8049384cf8d3943f02d514c3e391d5ad704b75431865b05d3d029f22"
	I1014 21:03:03.990900  708502 cri.go:89] found id: "660ef0bec4428d1153fa555e4a0f7daa6380e392888604d04e0e4322d7771003"
	I1014 21:03:03.990904  708502 cri.go:89] found id: "ec1e6454c986ce31b077f25fc95ab32526d877c36bcb22d9c7e025f1cce4a245"
	I1014 21:03:03.990908  708502 cri.go:89] found id: "0656608153f0e568502f545a2d6fcbc28356167d353ce4ef22b6e0c9db00be42"
	I1014 21:03:03.990911  708502 cri.go:89] found id: "0b6f7661d8beda525ec254094f39f88e7c53c0c15a7ddaad9ae3d5d15de7ab78"
	I1014 21:03:03.990914  708502 cri.go:89] found id: "29d1f875e07fb368ad19aeba36a9b146d4a9b45accd812d3388a16b14c5199c5"
	I1014 21:03:03.990917  708502 cri.go:89] found id: "fe32c1a3bade0fcd3521bb4bfa21cb5373816ddea996c29b888604e2792f7df8"
	I1014 21:03:03.990923  708502 cri.go:89] found id: "aa095dcde47c4e0f4ed9486946661610675ccee537498ca0707b769339e8e582"
	I1014 21:03:03.990926  708502 cri.go:89] found id: "5f45c9528a708bad4f13dd1f5f8d17082f4445a4bfd9e9b4dd4679184e33b4c7"
	I1014 21:03:03.990929  708502 cri.go:89] found id: "055ff8e53d7f5e81c45b6752bfdfaa25d49b2274d3c55e4ff21315a5cc1b8042"
	I1014 21:03:03.990932  708502 cri.go:89] found id: "a49c5d87c8b814e895398783eb4a4f8388fdc2356900055aae236ddce0b78dfa"
	I1014 21:03:03.990935  708502 cri.go:89] found id: "8d254f086a1c65340f4c5ac0cbb7b76bbedcd67dd47f333308d7d82abfd769b6"
	I1014 21:03:03.990938  708502 cri.go:89] found id: ""
	I1014 21:03:03.990986  708502 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:03:04.003350  708502 retry.go:31] will retry after 414.790698ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:03:04Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:03:04.419065  708502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:03:04.432471  708502 pause.go:52] kubelet running: false
	I1014 21:03:04.432540  708502 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:03:04.577392  708502 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:03:04.577480  708502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:03:04.649251  708502 cri.go:89] found id: "a6a670e4b96d401ca63818f2706aa6e56c947a842bddbb42558d305cb8007f32"
	I1014 21:03:04.649273  708502 cri.go:89] found id: "fbcaf7a84e4396d138d2b4f56efa1373b3bcedfc203b447d251e52b1052cba5c"
	I1014 21:03:04.649279  708502 cri.go:89] found id: "23cf2e7d8049384cf8d3943f02d514c3e391d5ad704b75431865b05d3d029f22"
	I1014 21:03:04.649283  708502 cri.go:89] found id: "660ef0bec4428d1153fa555e4a0f7daa6380e392888604d04e0e4322d7771003"
	I1014 21:03:04.649286  708502 cri.go:89] found id: "ec1e6454c986ce31b077f25fc95ab32526d877c36bcb22d9c7e025f1cce4a245"
	I1014 21:03:04.649289  708502 cri.go:89] found id: "0656608153f0e568502f545a2d6fcbc28356167d353ce4ef22b6e0c9db00be42"
	I1014 21:03:04.649292  708502 cri.go:89] found id: "0b6f7661d8beda525ec254094f39f88e7c53c0c15a7ddaad9ae3d5d15de7ab78"
	I1014 21:03:04.649295  708502 cri.go:89] found id: "29d1f875e07fb368ad19aeba36a9b146d4a9b45accd812d3388a16b14c5199c5"
	I1014 21:03:04.649299  708502 cri.go:89] found id: "fe32c1a3bade0fcd3521bb4bfa21cb5373816ddea996c29b888604e2792f7df8"
	I1014 21:03:04.649305  708502 cri.go:89] found id: "aa095dcde47c4e0f4ed9486946661610675ccee537498ca0707b769339e8e582"
	I1014 21:03:04.649308  708502 cri.go:89] found id: "5f45c9528a708bad4f13dd1f5f8d17082f4445a4bfd9e9b4dd4679184e33b4c7"
	I1014 21:03:04.649312  708502 cri.go:89] found id: "055ff8e53d7f5e81c45b6752bfdfaa25d49b2274d3c55e4ff21315a5cc1b8042"
	I1014 21:03:04.649320  708502 cri.go:89] found id: "a49c5d87c8b814e895398783eb4a4f8388fdc2356900055aae236ddce0b78dfa"
	I1014 21:03:04.649325  708502 cri.go:89] found id: "8d254f086a1c65340f4c5ac0cbb7b76bbedcd67dd47f333308d7d82abfd769b6"
	I1014 21:03:04.649328  708502 cri.go:89] found id: ""
	I1014 21:03:04.649376  708502 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:03:04.659777  708502 retry.go:31] will retry after 371.823234ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:03:04Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:03:05.032403  708502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:03:05.046302  708502 pause.go:52] kubelet running: false
	I1014 21:03:05.046390  708502 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:03:05.191730  708502 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:03:05.191820  708502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:03:05.268595  708502 cri.go:89] found id: "a6a670e4b96d401ca63818f2706aa6e56c947a842bddbb42558d305cb8007f32"
	I1014 21:03:05.268627  708502 cri.go:89] found id: "fbcaf7a84e4396d138d2b4f56efa1373b3bcedfc203b447d251e52b1052cba5c"
	I1014 21:03:05.268633  708502 cri.go:89] found id: "23cf2e7d8049384cf8d3943f02d514c3e391d5ad704b75431865b05d3d029f22"
	I1014 21:03:05.268636  708502 cri.go:89] found id: "660ef0bec4428d1153fa555e4a0f7daa6380e392888604d04e0e4322d7771003"
	I1014 21:03:05.268640  708502 cri.go:89] found id: "ec1e6454c986ce31b077f25fc95ab32526d877c36bcb22d9c7e025f1cce4a245"
	I1014 21:03:05.268644  708502 cri.go:89] found id: "0656608153f0e568502f545a2d6fcbc28356167d353ce4ef22b6e0c9db00be42"
	I1014 21:03:05.268648  708502 cri.go:89] found id: "0b6f7661d8beda525ec254094f39f88e7c53c0c15a7ddaad9ae3d5d15de7ab78"
	I1014 21:03:05.268651  708502 cri.go:89] found id: "29d1f875e07fb368ad19aeba36a9b146d4a9b45accd812d3388a16b14c5199c5"
	I1014 21:03:05.268655  708502 cri.go:89] found id: "fe32c1a3bade0fcd3521bb4bfa21cb5373816ddea996c29b888604e2792f7df8"
	I1014 21:03:05.268665  708502 cri.go:89] found id: "aa095dcde47c4e0f4ed9486946661610675ccee537498ca0707b769339e8e582"
	I1014 21:03:05.268671  708502 cri.go:89] found id: "5f45c9528a708bad4f13dd1f5f8d17082f4445a4bfd9e9b4dd4679184e33b4c7"
	I1014 21:03:05.268674  708502 cri.go:89] found id: "055ff8e53d7f5e81c45b6752bfdfaa25d49b2274d3c55e4ff21315a5cc1b8042"
	I1014 21:03:05.268677  708502 cri.go:89] found id: "a49c5d87c8b814e895398783eb4a4f8388fdc2356900055aae236ddce0b78dfa"
	I1014 21:03:05.268681  708502 cri.go:89] found id: "8d254f086a1c65340f4c5ac0cbb7b76bbedcd67dd47f333308d7d82abfd769b6"
	I1014 21:03:05.268684  708502 cri.go:89] found id: ""
	I1014 21:03:05.268738  708502 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:03:05.282795  708502 out.go:203] 
	W1014 21:03:05.285592  708502 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:03:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 21:03:05.285610  708502 out.go:285] * 
	W1014 21:03:05.293383  708502 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 21:03:05.298129  708502 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-361043 --alsologtostderr -v=5" : exit status 80
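The failure above boils down to `out/minikube-linux-arm64 pause -p pause-361043` exiting with status 80 after `sudo runc list -f json` fails inside the node ("open /run/runc: no such file or directory"). For reference, a minimal Go sketch that reruns that exact invocation and surfaces its exit status; this is not the pause_test.go code, and the binary path and profile name are simply copied from the log:

// Sketch only: rerun the failing pause command from the log and report its exit status.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name taken verbatim from the test log above.
	cmd := exec.Command("out/minikube-linux-arm64", "pause", "-p", "pause-361043",
		"--alsologtostderr", "-v=5")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output:\n%s\n", out)
	if err != nil {
		// For the run captured above this reports exit status 80 (GUEST_PAUSE).
		fmt.Printf("pause failed: %v\n", err)
	}
}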
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-361043
helpers_test.go:243: (dbg) docker inspect pause-361043:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "30fbd794e3c16303994914ecbf5d4ffbcb06c36c9d17ba6327ec7862f26c9a1b",
	        "Created": "2025-10-14T21:01:21.788832788Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 702695,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:01:21.858603225Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/30fbd794e3c16303994914ecbf5d4ffbcb06c36c9d17ba6327ec7862f26c9a1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/30fbd794e3c16303994914ecbf5d4ffbcb06c36c9d17ba6327ec7862f26c9a1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/30fbd794e3c16303994914ecbf5d4ffbcb06c36c9d17ba6327ec7862f26c9a1b/hosts",
	        "LogPath": "/var/lib/docker/containers/30fbd794e3c16303994914ecbf5d4ffbcb06c36c9d17ba6327ec7862f26c9a1b/30fbd794e3c16303994914ecbf5d4ffbcb06c36c9d17ba6327ec7862f26c9a1b-json.log",
	        "Name": "/pause-361043",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-361043:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-361043",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "30fbd794e3c16303994914ecbf5d4ffbcb06c36c9d17ba6327ec7862f26c9a1b",
	                "LowerDir": "/var/lib/docker/overlay2/9f057a2b18fcc42e39e64e7507ef97ce77f29a98169d3a063aa4d416c0ecc9ef-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9f057a2b18fcc42e39e64e7507ef97ce77f29a98169d3a063aa4d416c0ecc9ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9f057a2b18fcc42e39e64e7507ef97ce77f29a98169d3a063aa4d416c0ecc9ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9f057a2b18fcc42e39e64e7507ef97ce77f29a98169d3a063aa4d416c0ecc9ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-361043",
	                "Source": "/var/lib/docker/volumes/pause-361043/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-361043",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-361043",
	                "name.minikube.sigs.k8s.io": "pause-361043",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "99702bf6229b2670218018918bb2ab6c88ab78094dbaa12a08c456125cfed0a0",
	            "SandboxKey": "/var/run/docker/netns/99702bf6229b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33757"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33758"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33761"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33759"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33760"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-361043": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:ba:32:e2:c9:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6d18a8d03867b6fb9628f9a0a89945c5a6d5f894c5375753085a048b866d2bee",
	                    "EndpointID": "ac07ba91ec238c308232d1b00e420101c6b77f1ad39b3bc714ac7b76bb844ce8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-361043",
	                        "30fbd794e3c1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
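Rather than scanning the full inspect dump above, the container state can be read programmatically. A minimal sketch, assuming a local docker CLI and the profile name from this run; the struct mirrors only the State fields visible in the JSON above:

// Sketch only: decode the State block from `docker inspect` for the pause-361043 container.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type inspectEntry struct {
	State struct {
		Status string `json:"Status"`
		Paused bool   `json:"Paused"`
	} `json:"State"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "pause-361043").Output()
	if err != nil {
		log.Fatalf("docker inspect: %v", err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, e := range entries {
		// For the dump above this prints: status=running paused=false
		fmt.Printf("status=%s paused=%v\n", e.State.Status, e.State.Paused)
	}
}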
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-361043 -n pause-361043
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-361043 -n pause-361043: exit status 2 (339.802472ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
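As the helper notes, a non-zero exit from `minikube status` is not necessarily fatal here: the command returns 2 while still reporting the host as Running. A hedged sketch of the same invocation that captures the exit code instead of aborting (not the helpers_test.go implementation):

// Sketch only: run the status command from the log and inspect its exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "pause-361043", "-n", "pause-361043")
	out, err := cmd.Output()
	fmt.Printf("host state: %s\n", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() != 0 {
		// Matches the helper's "(may be ok)" handling: report, don't fail.
		fmt.Printf("status exited %d (may be ok)\n", exitErr.ExitCode())
	}
}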
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-361043 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-361043 logs -n 25: (1.477562442s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-925071 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-925071       │ jenkins │ v1.37.0 │ 14 Oct 25 20:56 UTC │ 14 Oct 25 20:56 UTC │
	│ start   │ -p missing-upgrade-558395 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-558395    │ jenkins │ v1.32.0 │ 14 Oct 25 20:56 UTC │ 14 Oct 25 20:57 UTC │
	│ start   │ -p NoKubernetes-925071 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-925071       │ jenkins │ v1.37.0 │ 14 Oct 25 20:57 UTC │ 14 Oct 25 20:58 UTC │
	│ start   │ -p missing-upgrade-558395 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-558395    │ jenkins │ v1.37.0 │ 14 Oct 25 20:57 UTC │ 14 Oct 25 20:58 UTC │
	│ delete  │ -p missing-upgrade-558395                                                                                                                │ missing-upgrade-558395    │ jenkins │ v1.37.0 │ 14 Oct 25 20:58 UTC │ 14 Oct 25 20:58 UTC │
	│ start   │ -p kubernetes-upgrade-791601 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-791601 │ jenkins │ v1.37.0 │ 14 Oct 25 20:58 UTC │ 14 Oct 25 20:58 UTC │
	│ delete  │ -p NoKubernetes-925071                                                                                                                   │ NoKubernetes-925071       │ jenkins │ v1.37.0 │ 14 Oct 25 20:58 UTC │ 14 Oct 25 20:58 UTC │
	│ start   │ -p NoKubernetes-925071 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-925071       │ jenkins │ v1.37.0 │ 14 Oct 25 20:58 UTC │ 14 Oct 25 20:59 UTC │
	│ stop    │ -p kubernetes-upgrade-791601                                                                                                             │ kubernetes-upgrade-791601 │ jenkins │ v1.37.0 │ 14 Oct 25 20:58 UTC │ 14 Oct 25 20:58 UTC │
	│ start   │ -p kubernetes-upgrade-791601 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-791601 │ jenkins │ v1.37.0 │ 14 Oct 25 20:58 UTC │                     │
	│ ssh     │ -p NoKubernetes-925071 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-925071       │ jenkins │ v1.37.0 │ 14 Oct 25 20:59 UTC │                     │
	│ stop    │ -p NoKubernetes-925071                                                                                                                   │ NoKubernetes-925071       │ jenkins │ v1.37.0 │ 14 Oct 25 20:59 UTC │ 14 Oct 25 20:59 UTC │
	│ start   │ -p NoKubernetes-925071 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-925071       │ jenkins │ v1.37.0 │ 14 Oct 25 20:59 UTC │ 14 Oct 25 20:59 UTC │
	│ ssh     │ -p NoKubernetes-925071 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-925071       │ jenkins │ v1.37.0 │ 14 Oct 25 20:59 UTC │                     │
	│ delete  │ -p NoKubernetes-925071                                                                                                                   │ NoKubernetes-925071       │ jenkins │ v1.37.0 │ 14 Oct 25 20:59 UTC │ 14 Oct 25 20:59 UTC │
	│ start   │ -p stopped-upgrade-755343 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-755343    │ jenkins │ v1.32.0 │ 14 Oct 25 20:59 UTC │ 14 Oct 25 20:59 UTC │
	│ stop    │ stopped-upgrade-755343 stop                                                                                                              │ stopped-upgrade-755343    │ jenkins │ v1.32.0 │ 14 Oct 25 20:59 UTC │ 14 Oct 25 20:59 UTC │
	│ start   │ -p stopped-upgrade-755343 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-755343    │ jenkins │ v1.37.0 │ 14 Oct 25 20:59 UTC │ 14 Oct 25 21:00 UTC │
	│ delete  │ -p stopped-upgrade-755343                                                                                                                │ stopped-upgrade-755343    │ jenkins │ v1.37.0 │ 14 Oct 25 21:00 UTC │ 14 Oct 25 21:00 UTC │
	│ start   │ -p running-upgrade-274014 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-274014    │ jenkins │ v1.32.0 │ 14 Oct 25 21:00 UTC │ 14 Oct 25 21:00 UTC │
	│ start   │ -p running-upgrade-274014 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-274014    │ jenkins │ v1.37.0 │ 14 Oct 25 21:00 UTC │ 14 Oct 25 21:01 UTC │
	│ delete  │ -p running-upgrade-274014                                                                                                                │ running-upgrade-274014    │ jenkins │ v1.37.0 │ 14 Oct 25 21:01 UTC │ 14 Oct 25 21:01 UTC │
	│ start   │ -p pause-361043 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-361043              │ jenkins │ v1.37.0 │ 14 Oct 25 21:01 UTC │ 14 Oct 25 21:02 UTC │
	│ start   │ -p pause-361043 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-361043              │ jenkins │ v1.37.0 │ 14 Oct 25 21:02 UTC │ 14 Oct 25 21:03 UTC │
	│ pause   │ -p pause-361043 --alsologtostderr -v=5                                                                                                   │ pause-361043              │ jenkins │ v1.37.0 │ 14 Oct 25 21:03 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:02:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:02:35.328609  706484 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:02:35.328788  706484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:02:35.328798  706484 out.go:374] Setting ErrFile to fd 2...
	I1014 21:02:35.328804  706484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:02:35.329080  706484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:02:35.329471  706484 out.go:368] Setting JSON to false
	I1014 21:02:35.330605  706484 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13506,"bootTime":1760462250,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:02:35.330690  706484 start.go:141] virtualization:  
	I1014 21:02:35.333797  706484 out.go:179] * [pause-361043] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:02:35.337727  706484 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:02:35.337830  706484 notify.go:220] Checking for updates...
	I1014 21:02:35.344273  706484 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:02:35.347180  706484 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:02:35.350168  706484 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:02:35.353145  706484 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:02:35.356062  706484 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:02:35.359765  706484 config.go:182] Loaded profile config "pause-361043": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:02:35.360429  706484 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:02:35.382488  706484 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:02:35.382678  706484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:02:35.448777  706484 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-14 21:02:35.439240457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:02:35.448904  706484 docker.go:318] overlay module found
	I1014 21:02:35.452116  706484 out.go:179] * Using the docker driver based on existing profile
	I1014 21:02:35.454906  706484 start.go:305] selected driver: docker
	I1014 21:02:35.454926  706484 start.go:925] validating driver "docker" against &{Name:pause-361043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-361043 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:02:35.455055  706484 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:02:35.455192  706484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:02:35.515367  706484 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-14 21:02:35.505037388 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:02:35.515816  706484 cni.go:84] Creating CNI manager for ""
	I1014 21:02:35.515883  706484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:02:35.515929  706484 start.go:349] cluster config:
	{Name:pause-361043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-361043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:02:35.520941  706484 out.go:179] * Starting "pause-361043" primary control-plane node in "pause-361043" cluster
	I1014 21:02:35.523837  706484 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:02:35.526716  706484 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:02:35.529649  706484 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:02:35.529712  706484 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 21:02:35.529727  706484 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:02:35.529741  706484 cache.go:58] Caching tarball of preloaded images
	I1014 21:02:35.529829  706484 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:02:35.529839  706484 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 21:02:35.529971  706484 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/config.json ...
	I1014 21:02:35.557784  706484 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:02:35.557808  706484 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:02:35.557838  706484 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:02:35.557861  706484 start.go:360] acquireMachinesLock for pause-361043: {Name:mk981938910bdc0a2c695c21ebe32bffff4c7396 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:02:35.557933  706484 start.go:364] duration metric: took 48.591µs to acquireMachinesLock for "pause-361043"
	I1014 21:02:35.557956  706484 start.go:96] Skipping create...Using existing machine configuration
	I1014 21:02:35.557968  706484 fix.go:54] fixHost starting: 
	I1014 21:02:35.558252  706484 cli_runner.go:164] Run: docker container inspect pause-361043 --format={{.State.Status}}
	I1014 21:02:35.575142  706484 fix.go:112] recreateIfNeeded on pause-361043: state=Running err=<nil>
	W1014 21:02:35.575178  706484 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 21:02:34.795160  688786 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:02:36.748706  688786 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:51520->192.168.85.2:8443: read: connection reset by peer
	I1014 21:02:36.748762  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:02:36.748823  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:02:36.789476  688786 cri.go:89] found id: "a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:36.789495  688786 cri.go:89] found id: ""
	I1014 21:02:36.789503  688786 logs.go:282] 1 containers: [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe]
	I1014 21:02:36.789563  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:36.795222  688786 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:02:36.795295  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:02:36.828214  688786 cri.go:89] found id: ""
	I1014 21:02:36.828235  688786 logs.go:282] 0 containers: []
	W1014 21:02:36.828243  688786 logs.go:284] No container was found matching "etcd"
	I1014 21:02:36.828250  688786 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:02:36.828338  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:02:36.865761  688786 cri.go:89] found id: ""
	I1014 21:02:36.865798  688786 logs.go:282] 0 containers: []
	W1014 21:02:36.865808  688786 logs.go:284] No container was found matching "coredns"
	I1014 21:02:36.865818  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:02:36.865883  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:02:36.907394  688786 cri.go:89] found id: "cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:36.907417  688786 cri.go:89] found id: ""
	I1014 21:02:36.907426  688786 logs.go:282] 1 containers: [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1]
	I1014 21:02:36.907482  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:36.912891  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:02:36.912960  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:02:36.956034  688786 cri.go:89] found id: ""
	I1014 21:02:36.956061  688786 logs.go:282] 0 containers: []
	W1014 21:02:36.956071  688786 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:02:36.956077  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:02:36.956143  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:02:36.988130  688786 cri.go:89] found id: "e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:36.988155  688786 cri.go:89] found id: "569a58ec6039ad2001138c23001f6758c0c78a15710fb1bbd05c0cd6bf081e91"
	I1014 21:02:36.988160  688786 cri.go:89] found id: ""
	I1014 21:02:36.988168  688786 logs.go:282] 2 containers: [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de 569a58ec6039ad2001138c23001f6758c0c78a15710fb1bbd05c0cd6bf081e91]
	I1014 21:02:36.988261  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:36.992198  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:36.995946  688786 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:02:36.996017  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:02:37.032956  688786 cri.go:89] found id: ""
	I1014 21:02:37.032978  688786 logs.go:282] 0 containers: []
	W1014 21:02:37.032986  688786 logs.go:284] No container was found matching "kindnet"
	I1014 21:02:37.032992  688786 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 21:02:37.033051  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 21:02:37.081811  688786 cri.go:89] found id: ""
	I1014 21:02:37.081836  688786 logs.go:282] 0 containers: []
	W1014 21:02:37.081846  688786 logs.go:284] No container was found matching "storage-provisioner"
	I1014 21:02:37.081860  688786 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:02:37.081872  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:02:37.178939  688786 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:02:37.178957  688786 logs.go:123] Gathering logs for kube-apiserver [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe] ...
	I1014 21:02:37.178969  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:37.217522  688786 logs.go:123] Gathering logs for kube-scheduler [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1] ...
	I1014 21:02:37.217554  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:37.275912  688786 logs.go:123] Gathering logs for kube-controller-manager [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de] ...
	I1014 21:02:37.275950  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:37.304387  688786 logs.go:123] Gathering logs for kube-controller-manager [569a58ec6039ad2001138c23001f6758c0c78a15710fb1bbd05c0cd6bf081e91] ...
	I1014 21:02:37.304415  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 569a58ec6039ad2001138c23001f6758c0c78a15710fb1bbd05c0cd6bf081e91"
	I1014 21:02:37.331033  688786 logs.go:123] Gathering logs for container status ...
	I1014 21:02:37.331065  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 21:02:37.364484  688786 logs.go:123] Gathering logs for kubelet ...
	I1014 21:02:37.364514  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:02:37.476455  688786 logs.go:123] Gathering logs for dmesg ...
	I1014 21:02:37.476489  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 21:02:37.494542  688786 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:02:37.494569  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:02:35.578559  706484 out.go:252] * Updating the running docker "pause-361043" container ...
	I1014 21:02:35.578590  706484 machine.go:93] provisionDockerMachine start ...
	I1014 21:02:35.578682  706484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:02:35.595764  706484 main.go:141] libmachine: Using SSH client type: native
	I1014 21:02:35.596122  706484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33757 <nil> <nil>}
	I1014 21:02:35.596136  706484 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:02:35.743995  706484 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-361043
	
	I1014 21:02:35.744016  706484 ubuntu.go:182] provisioning hostname "pause-361043"
	I1014 21:02:35.744078  706484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:02:35.761154  706484 main.go:141] libmachine: Using SSH client type: native
	I1014 21:02:35.761458  706484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33757 <nil> <nil>}
	I1014 21:02:35.761484  706484 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-361043 && echo "pause-361043" | sudo tee /etc/hostname
	I1014 21:02:35.915096  706484 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-361043
	
	I1014 21:02:35.915189  706484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:02:35.934670  706484 main.go:141] libmachine: Using SSH client type: native
	I1014 21:02:35.934991  706484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33757 <nil> <nil>}
	I1014 21:02:35.935021  706484 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-361043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-361043/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-361043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:02:36.084721  706484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 21:02:36.084749  706484 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:02:36.084778  706484 ubuntu.go:190] setting up certificates
	I1014 21:02:36.084788  706484 provision.go:84] configureAuth start
	I1014 21:02:36.084851  706484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-361043
	I1014 21:02:36.103487  706484 provision.go:143] copyHostCerts
	I1014 21:02:36.103561  706484 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:02:36.103576  706484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:02:36.103655  706484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:02:36.103760  706484 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:02:36.103772  706484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:02:36.103800  706484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:02:36.103867  706484 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:02:36.103877  706484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:02:36.103904  706484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:02:36.103958  706484 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.pause-361043 san=[127.0.0.1 192.168.76.2 localhost minikube pause-361043]
	I1014 21:02:36.638106  706484 provision.go:177] copyRemoteCerts
	I1014 21:02:36.638175  706484 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:02:36.638218  706484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:02:36.655818  706484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33757 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/pause-361043/id_rsa Username:docker}
	I1014 21:02:36.761114  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:02:36.782443  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 21:02:36.804103  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 21:02:36.836122  706484 provision.go:87] duration metric: took 751.307293ms to configureAuth
	I1014 21:02:36.836188  706484 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:02:36.836553  706484 config.go:182] Loaded profile config "pause-361043": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:02:36.836702  706484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:02:36.861250  706484 main.go:141] libmachine: Using SSH client type: native
	I1014 21:02:36.861562  706484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33757 <nil> <nil>}
	I1014 21:02:36.861577  706484 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:02:40.060157  688786 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:02:40.060952  688786 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
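The healthz probe above is an HTTPS GET against the apiserver; "connection refused" simply means nothing is listening on 192.168.85.2:8443 yet. A rough Go equivalent of that probe, written as an illustration only (skipping TLS verification is an assumption made here for brevity, not necessarily what minikube does):

    // Probe the apiserver /healthz endpoint checked in the log above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.85.2:8443/healthz")
    	if err != nil {
    		// While the apiserver is down this prints "connect: connection refused".
    		fmt.Println("stopped:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz:", resp.Status)
    }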
	I1014 21:02:40.061032  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:02:40.061131  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:02:40.096504  688786 cri.go:89] found id: "a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:40.096526  688786 cri.go:89] found id: ""
	I1014 21:02:40.096535  688786 logs.go:282] 1 containers: [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe]
	I1014 21:02:40.096603  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:40.101441  688786 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:02:40.101524  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:02:40.132721  688786 cri.go:89] found id: ""
	I1014 21:02:40.132749  688786 logs.go:282] 0 containers: []
	W1014 21:02:40.132759  688786 logs.go:284] No container was found matching "etcd"
	I1014 21:02:40.132766  688786 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:02:40.132834  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:02:40.164958  688786 cri.go:89] found id: ""
	I1014 21:02:40.164986  688786 logs.go:282] 0 containers: []
	W1014 21:02:40.164995  688786 logs.go:284] No container was found matching "coredns"
	I1014 21:02:40.165002  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:02:40.165067  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:02:40.194049  688786 cri.go:89] found id: "cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:40.194071  688786 cri.go:89] found id: ""
	I1014 21:02:40.194080  688786 logs.go:282] 1 containers: [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1]
	I1014 21:02:40.194138  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:40.198247  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:02:40.198320  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:02:40.225200  688786 cri.go:89] found id: ""
	I1014 21:02:40.225226  688786 logs.go:282] 0 containers: []
	W1014 21:02:40.225234  688786 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:02:40.225240  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:02:40.225297  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:02:40.252152  688786 cri.go:89] found id: "e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:40.252176  688786 cri.go:89] found id: "569a58ec6039ad2001138c23001f6758c0c78a15710fb1bbd05c0cd6bf081e91"
	I1014 21:02:40.252181  688786 cri.go:89] found id: ""
	I1014 21:02:40.252193  688786 logs.go:282] 2 containers: [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de 569a58ec6039ad2001138c23001f6758c0c78a15710fb1bbd05c0cd6bf081e91]
	I1014 21:02:40.252247  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:40.256028  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:40.259583  688786 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:02:40.259656  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:02:40.284923  688786 cri.go:89] found id: ""
	I1014 21:02:40.284948  688786 logs.go:282] 0 containers: []
	W1014 21:02:40.284957  688786 logs.go:284] No container was found matching "kindnet"
	I1014 21:02:40.284963  688786 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 21:02:40.285018  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 21:02:40.315678  688786 cri.go:89] found id: ""
	I1014 21:02:40.315703  688786 logs.go:282] 0 containers: []
	W1014 21:02:40.315712  688786 logs.go:284] No container was found matching "storage-provisioner"
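Each "listing CRI containers" step above shells out to crictl with a name filter and treats empty output as "No container was found matching". A condensed, illustrative sketch of that scan (not minikube's actual code, which lives in cri.go and logs.go):

    // Scan for the control-plane components the log checks above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("%s: crictl failed: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
    	}
    }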
	I1014 21:02:40.315726  688786 logs.go:123] Gathering logs for container status ...
	I1014 21:02:40.315739  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 21:02:40.348297  688786 logs.go:123] Gathering logs for kubelet ...
	I1014 21:02:40.348347  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:02:40.460256  688786 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:02:40.460289  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:02:40.524641  688786 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:02:40.524664  688786 logs.go:123] Gathering logs for kube-scheduler [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1] ...
	I1014 21:02:40.524677  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:40.592499  688786 logs.go:123] Gathering logs for dmesg ...
	I1014 21:02:40.592535  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 21:02:40.611040  688786 logs.go:123] Gathering logs for kube-apiserver [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe] ...
	I1014 21:02:40.611087  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:40.645800  688786 logs.go:123] Gathering logs for kube-controller-manager [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de] ...
	I1014 21:02:40.645833  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:40.672096  688786 logs.go:123] Gathering logs for kube-controller-manager [569a58ec6039ad2001138c23001f6758c0c78a15710fb1bbd05c0cd6bf081e91] ...
	I1014 21:02:40.672125  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 569a58ec6039ad2001138c23001f6758c0c78a15710fb1bbd05c0cd6bf081e91"
	I1014 21:02:40.699928  688786 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:02:40.699955  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:02:43.259936  688786 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:02:43.260348  688786 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1014 21:02:43.260393  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:02:43.260451  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:02:43.298692  688786 cri.go:89] found id: "a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:43.298710  688786 cri.go:89] found id: ""
	I1014 21:02:43.298717  688786 logs.go:282] 1 containers: [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe]
	I1014 21:02:43.298778  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:43.305594  688786 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:02:43.305659  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:02:43.339238  688786 cri.go:89] found id: ""
	I1014 21:02:43.339259  688786 logs.go:282] 0 containers: []
	W1014 21:02:43.339267  688786 logs.go:284] No container was found matching "etcd"
	I1014 21:02:43.339273  688786 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:02:43.339331  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:02:43.374353  688786 cri.go:89] found id: ""
	I1014 21:02:43.374373  688786 logs.go:282] 0 containers: []
	W1014 21:02:43.374381  688786 logs.go:284] No container was found matching "coredns"
	I1014 21:02:43.374388  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:02:43.374442  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:02:42.244942  706484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:02:42.244967  706484 machine.go:96] duration metric: took 6.666367806s to provisionDockerMachine
	I1014 21:02:42.244978  706484 start.go:293] postStartSetup for "pause-361043" (driver="docker")
	I1014 21:02:42.244988  706484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:02:42.245053  706484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:02:42.245103  706484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:02:42.264616  706484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33757 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/pause-361043/id_rsa Username:docker}
	I1014 21:02:42.368833  706484 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:02:42.372441  706484 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:02:42.372474  706484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:02:42.372496  706484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:02:42.372552  706484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:02:42.372633  706484 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:02:42.372741  706484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:02:42.380335  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:02:42.398088  706484 start.go:296] duration metric: took 153.094105ms for postStartSetup
	I1014 21:02:42.398218  706484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:02:42.398265  706484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:02:42.414920  706484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33757 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/pause-361043/id_rsa Username:docker}
	I1014 21:02:42.517879  706484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:02:42.523459  706484 fix.go:56] duration metric: took 6.965487793s for fixHost
	I1014 21:02:42.523485  706484 start.go:83] releasing machines lock for "pause-361043", held for 6.965540339s
	I1014 21:02:42.523571  706484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-361043
	I1014 21:02:42.546401  706484 ssh_runner.go:195] Run: cat /version.json
	I1014 21:02:42.546451  706484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:02:42.546747  706484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:02:42.546801  706484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:02:42.570468  706484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33757 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/pause-361043/id_rsa Username:docker}
	I1014 21:02:42.580899  706484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33757 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/pause-361043/id_rsa Username:docker}
	I1014 21:02:42.672375  706484 ssh_runner.go:195] Run: systemctl --version
	I1014 21:02:42.772519  706484 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:02:42.814978  706484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:02:42.819450  706484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:02:42.819566  706484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:02:42.827766  706484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 21:02:42.827787  706484 start.go:495] detecting cgroup driver to use...
	I1014 21:02:42.827818  706484 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:02:42.827864  706484 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:02:42.843664  706484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:02:42.857570  706484 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:02:42.857653  706484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:02:42.873200  706484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:02:42.886511  706484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:02:43.013628  706484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:02:43.153959  706484 docker.go:234] disabling docker service ...
	I1014 21:02:43.154063  706484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:02:43.169482  706484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:02:43.182580  706484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:02:43.354240  706484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:02:43.507120  706484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:02:43.520712  706484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:02:43.538125  706484 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:02:43.538284  706484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:02:43.549287  706484 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:02:43.549440  706484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:02:43.561831  706484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:02:43.571258  706484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:02:43.580035  706484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:02:43.588673  706484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:02:43.598220  706484 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:02:43.607665  706484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:02:43.626947  706484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:02:43.642739  706484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:02:43.654185  706484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:02:43.821763  706484 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 21:02:44.029280  706484 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:02:44.029353  706484 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:02:44.035512  706484 start.go:563] Will wait 60s for crictl version
	I1014 21:02:44.035582  706484 ssh_runner.go:195] Run: which crictl
	I1014 21:02:44.041230  706484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:02:44.069027  706484 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
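The "Will wait 60s for socket path" and "Will wait 60s for crictl version" steps above amount to polling for /var/run/crio/crio.sock after the CRI-O restart and then running crictl version once it exists. A small sketch of the socket wait (the poll interval is an assumption):

    // Wait up to 60s for the CRI-O socket to appear after a restart.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	const sock = "/var/run/crio/crio.sock"
    	deadline := time.Now().Add(60 * time.Second)
    	for {
    		if _, err := os.Stat(sock); err == nil {
    			fmt.Println("socket ready:", sock)
    			return
    		}
    		if time.Now().After(deadline) {
    			fmt.Println("timed out waiting for", sock)
    			os.Exit(1)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }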
	I1014 21:02:44.069107  706484 ssh_runner.go:195] Run: crio --version
	I1014 21:02:44.106385  706484 ssh_runner.go:195] Run: crio --version
	I1014 21:02:44.146674  706484 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 21:02:44.149616  706484 cli_runner.go:164] Run: docker network inspect pause-361043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:02:44.171262  706484 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1014 21:02:44.175218  706484 kubeadm.go:883] updating cluster {Name:pause-361043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-361043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:02:44.175362  706484 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:02:44.175425  706484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:02:44.210763  706484 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:02:44.210788  706484 crio.go:433] Images already preloaded, skipping extraction
	I1014 21:02:44.210844  706484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:02:44.240574  706484 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:02:44.240600  706484 cache_images.go:85] Images are preloaded, skipping loading
	I1014 21:02:44.240610  706484 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1014 21:02:44.240711  706484 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-361043 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-361043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 21:02:44.240795  706484 ssh_runner.go:195] Run: crio config
	I1014 21:02:44.295973  706484 cni.go:84] Creating CNI manager for ""
	I1014 21:02:44.296001  706484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:02:44.296017  706484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 21:02:44.296074  706484 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-361043 NodeName:pause-361043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:02:44.296214  706484 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-361043"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 21:02:44.296292  706484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:02:44.305110  706484 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:02:44.305178  706484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:02:44.312852  706484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1014 21:02:44.340619  706484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:02:44.360728  706484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
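The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new as four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick stdlib-only sanity check that the expected kinds are present, shown here purely as an illustration:

    // List the "kind:" of each YAML document in the generated kubeadm config.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log above
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	doc := 0
    	scanner := bufio.NewScanner(f)
    	for scanner.Scan() {
    		line := scanner.Text()
    		switch {
    		case strings.TrimSpace(line) == "---":
    			doc++ // document separator
    		case strings.HasPrefix(line, "kind:"):
    			fmt.Printf("document %d: %s\n", doc, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
    		}
    	}
    	if err := scanner.Err(); err != nil {
    		panic(err)
    	}
    }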
	I1014 21:02:44.384620  706484 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:02:44.389734  706484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:02:44.615182  706484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:02:44.652609  706484 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043 for IP: 192.168.76.2
	I1014 21:02:44.652632  706484 certs.go:195] generating shared ca certs ...
	I1014 21:02:44.652648  706484 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:02:44.652784  706484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:02:44.652837  706484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:02:44.652850  706484 certs.go:257] generating profile certs ...
	I1014 21:02:44.652937  706484 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/client.key
	I1014 21:02:44.653002  706484 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/apiserver.key.04b348e2
	I1014 21:02:44.653048  706484 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/proxy-client.key
	I1014 21:02:44.653162  706484 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:02:44.653196  706484 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:02:44.653208  706484 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:02:44.653233  706484 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:02:44.653259  706484 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:02:44.653285  706484 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:02:44.653329  706484 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:02:44.653915  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:02:44.697188  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:02:44.733299  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:02:44.764416  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:02:44.792879  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1014 21:02:44.824026  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 21:02:44.857514  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:02:44.890074  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 21:02:44.919963  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:02:44.956500  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:02:44.976935  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:02:45.003667  706484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:02:45.037000  706484 ssh_runner.go:195] Run: openssl version
	I1014 21:02:45.046343  706484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:02:45.058374  706484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:02:45.064175  706484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:02:45.064360  706484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:02:45.120177  706484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 21:02:45.133541  706484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:02:45.147775  706484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:02:45.157018  706484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:02:45.157165  706484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:02:45.216626  706484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:02:45.230494  706484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:02:45.243084  706484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:02:45.249373  706484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:02:45.249549  706484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:02:45.300850  706484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 21:02:45.312078  706484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:02:45.319420  706484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 21:02:45.395790  706484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 21:02:45.450593  706484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 21:02:45.502110  706484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 21:02:45.564364  706484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 21:02:45.608763  706484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
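Each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is presumably how this restart path verifies that the existing certs are still usable. The equivalent check in Go, using one of the paths from the commands above, as an illustrative sketch:

    // Report whether a certificate expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	if cert.NotAfter.Before(time.Now().Add(86400 * time.Second)) {
    		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
    		os.Exit(1) // mirrors openssl's non-zero exit for -checkend
    	}
    	fmt.Println("certificate valid past the 24h window:", cert.NotAfter)
    }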
	I1014 21:02:45.655050  706484 kubeadm.go:400] StartCluster: {Name:pause-361043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-361043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:02:45.655249  706484 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:02:45.655349  706484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:02:45.687638  706484 cri.go:89] found id: "a6a670e4b96d401ca63818f2706aa6e56c947a842bddbb42558d305cb8007f32"
	I1014 21:02:45.687712  706484 cri.go:89] found id: "fbcaf7a84e4396d138d2b4f56efa1373b3bcedfc203b447d251e52b1052cba5c"
	I1014 21:02:45.687734  706484 cri.go:89] found id: "23cf2e7d8049384cf8d3943f02d514c3e391d5ad704b75431865b05d3d029f22"
	I1014 21:02:45.687753  706484 cri.go:89] found id: "660ef0bec4428d1153fa555e4a0f7daa6380e392888604d04e0e4322d7771003"
	I1014 21:02:45.687787  706484 cri.go:89] found id: "ec1e6454c986ce31b077f25fc95ab32526d877c36bcb22d9c7e025f1cce4a245"
	I1014 21:02:45.687819  706484 cri.go:89] found id: "0656608153f0e568502f545a2d6fcbc28356167d353ce4ef22b6e0c9db00be42"
	I1014 21:02:45.687837  706484 cri.go:89] found id: "0b6f7661d8beda525ec254094f39f88e7c53c0c15a7ddaad9ae3d5d15de7ab78"
	I1014 21:02:45.687867  706484 cri.go:89] found id: "29d1f875e07fb368ad19aeba36a9b146d4a9b45accd812d3388a16b14c5199c5"
	I1014 21:02:45.687888  706484 cri.go:89] found id: "fe32c1a3bade0fcd3521bb4bfa21cb5373816ddea996c29b888604e2792f7df8"
	I1014 21:02:45.687912  706484 cri.go:89] found id: "aa095dcde47c4e0f4ed9486946661610675ccee537498ca0707b769339e8e582"
	I1014 21:02:45.687929  706484 cri.go:89] found id: "5f45c9528a708bad4f13dd1f5f8d17082f4445a4bfd9e9b4dd4679184e33b4c7"
	I1014 21:02:45.687960  706484 cri.go:89] found id: "055ff8e53d7f5e81c45b6752bfdfaa25d49b2274d3c55e4ff21315a5cc1b8042"
	I1014 21:02:45.687978  706484 cri.go:89] found id: "a49c5d87c8b814e895398783eb4a4f8388fdc2356900055aae236ddce0b78dfa"
	I1014 21:02:45.687999  706484 cri.go:89] found id: "8d254f086a1c65340f4c5ac0cbb7b76bbedcd67dd47f333308d7d82abfd769b6"
	I1014 21:02:45.688031  706484 cri.go:89] found id: ""
	I1014 21:02:45.688118  706484 ssh_runner.go:195] Run: sudo runc list -f json
	W1014 21:02:45.701440  706484 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:02:45Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:02:45.701599  706484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:02:45.710873  706484 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 21:02:45.710948  706484 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 21:02:45.711031  706484 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 21:02:45.719575  706484 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 21:02:45.720387  706484 kubeconfig.go:125] found "pause-361043" server: "https://192.168.76.2:8443"
	I1014 21:02:45.721370  706484 kapi.go:59] client config for pause-361043: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/client.key", CAFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 21:02:45.722279  706484 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 21:02:45.722332  706484 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 21:02:45.722354  706484 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 21:02:45.722378  706484 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 21:02:45.722413  706484 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 21:02:45.724271  706484 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 21:02:45.734384  706484 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1014 21:02:45.734466  706484 kubeadm.go:601] duration metric: took 23.49702ms to restartPrimaryControlPlane
	I1014 21:02:45.734491  706484 kubeadm.go:402] duration metric: took 79.450371ms to StartCluster
	I1014 21:02:45.734535  706484 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:02:45.734632  706484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:02:45.735617  706484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:02:45.735942  706484 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:02:45.736517  706484 config.go:182] Loaded profile config "pause-361043": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:02:45.736675  706484 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 21:02:45.739201  706484 out.go:179] * Verifying Kubernetes components...
	I1014 21:02:45.739289  706484 out.go:179] * Enabled addons: 
	I1014 21:02:43.431633  688786 cri.go:89] found id: "cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:43.431653  688786 cri.go:89] found id: ""
	I1014 21:02:43.431661  688786 logs.go:282] 1 containers: [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1]
	I1014 21:02:43.431715  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:43.435792  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:02:43.435859  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:02:43.462770  688786 cri.go:89] found id: ""
	I1014 21:02:43.462792  688786 logs.go:282] 0 containers: []
	W1014 21:02:43.462800  688786 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:02:43.462806  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:02:43.462867  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:02:43.493417  688786 cri.go:89] found id: "e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:43.493437  688786 cri.go:89] found id: ""
	I1014 21:02:43.493444  688786 logs.go:282] 1 containers: [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de]
	I1014 21:02:43.493499  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:43.498596  688786 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:02:43.498717  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:02:43.553779  688786 cri.go:89] found id: ""
	I1014 21:02:43.553855  688786 logs.go:282] 0 containers: []
	W1014 21:02:43.553879  688786 logs.go:284] No container was found matching "kindnet"
	I1014 21:02:43.553902  688786 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 21:02:43.554021  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 21:02:43.598002  688786 cri.go:89] found id: ""
	I1014 21:02:43.598028  688786 logs.go:282] 0 containers: []
	W1014 21:02:43.598037  688786 logs.go:284] No container was found matching "storage-provisioner"
	I1014 21:02:43.598046  688786 logs.go:123] Gathering logs for kubelet ...
	I1014 21:02:43.598056  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:02:43.731962  688786 logs.go:123] Gathering logs for dmesg ...
	I1014 21:02:43.732000  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 21:02:43.754726  688786 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:02:43.754757  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:02:43.849414  688786 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:02:43.849437  688786 logs.go:123] Gathering logs for kube-apiserver [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe] ...
	I1014 21:02:43.849450  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:43.897221  688786 logs.go:123] Gathering logs for kube-scheduler [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1] ...
	I1014 21:02:43.897297  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:43.978844  688786 logs.go:123] Gathering logs for kube-controller-manager [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de] ...
	I1014 21:02:43.978919  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:44.011861  688786 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:02:44.011908  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:02:44.084191  688786 logs.go:123] Gathering logs for container status ...
	I1014 21:02:44.084232  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 21:02:46.628366  688786 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:02:46.628756  688786 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1014 21:02:46.628793  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:02:46.628845  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:02:46.673754  688786 cri.go:89] found id: "a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:46.673773  688786 cri.go:89] found id: ""
	I1014 21:02:46.673781  688786 logs.go:282] 1 containers: [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe]
	I1014 21:02:46.673836  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:46.680141  688786 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:02:46.680213  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:02:46.725836  688786 cri.go:89] found id: ""
	I1014 21:02:46.725858  688786 logs.go:282] 0 containers: []
	W1014 21:02:46.725866  688786 logs.go:284] No container was found matching "etcd"
	I1014 21:02:46.725872  688786 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:02:46.725935  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:02:46.789793  688786 cri.go:89] found id: ""
	I1014 21:02:46.789815  688786 logs.go:282] 0 containers: []
	W1014 21:02:46.789823  688786 logs.go:284] No container was found matching "coredns"
	I1014 21:02:46.789829  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:02:46.789886  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:02:46.848694  688786 cri.go:89] found id: "cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:46.848713  688786 cri.go:89] found id: ""
	I1014 21:02:46.848721  688786 logs.go:282] 1 containers: [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1]
	I1014 21:02:46.848774  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:46.853781  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:02:46.853852  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:02:46.899768  688786 cri.go:89] found id: ""
	I1014 21:02:46.899788  688786 logs.go:282] 0 containers: []
	W1014 21:02:46.899796  688786 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:02:46.899802  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:02:46.899858  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:02:46.927426  688786 cri.go:89] found id: "e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:46.927445  688786 cri.go:89] found id: ""
	I1014 21:02:46.927453  688786 logs.go:282] 1 containers: [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de]
	I1014 21:02:46.927511  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:46.931592  688786 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:02:46.931709  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:02:46.991428  688786 cri.go:89] found id: ""
	I1014 21:02:46.991496  688786 logs.go:282] 0 containers: []
	W1014 21:02:46.991518  688786 logs.go:284] No container was found matching "kindnet"
	I1014 21:02:46.991540  688786 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 21:02:46.991630  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 21:02:47.024967  688786 cri.go:89] found id: ""
	I1014 21:02:47.025046  688786 logs.go:282] 0 containers: []
	W1014 21:02:47.025069  688786 logs.go:284] No container was found matching "storage-provisioner"
	I1014 21:02:47.025110  688786 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:02:47.025138  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:02:47.106553  688786 logs.go:123] Gathering logs for container status ...
	I1014 21:02:47.106588  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 21:02:47.176875  688786 logs.go:123] Gathering logs for kubelet ...
	I1014 21:02:47.176899  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:02:47.332904  688786 logs.go:123] Gathering logs for dmesg ...
	I1014 21:02:47.332987  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 21:02:47.356207  688786 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:02:47.356282  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:02:47.491980  688786 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:02:47.492044  688786 logs.go:123] Gathering logs for kube-apiserver [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe] ...
	I1014 21:02:47.492071  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:47.547011  688786 logs.go:123] Gathering logs for kube-scheduler [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1] ...
	I1014 21:02:47.547108  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:47.639477  688786 logs.go:123] Gathering logs for kube-controller-manager [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de] ...
	I1014 21:02:47.639562  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:45.741178  706484 addons.go:514] duration metric: took 4.498351ms for enable addons: enabled=[]
	I1014 21:02:45.741286  706484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:02:45.977712  706484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:02:45.992884  706484 node_ready.go:35] waiting up to 6m0s for node "pause-361043" to be "Ready" ...
	I1014 21:02:50.188421  688786 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:02:50.188848  688786 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1014 21:02:50.188895  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:02:50.188953  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:02:50.219510  688786 cri.go:89] found id: "a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:50.219537  688786 cri.go:89] found id: ""
	I1014 21:02:50.219545  688786 logs.go:282] 1 containers: [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe]
	I1014 21:02:50.219600  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:50.224149  688786 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:02:50.224222  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:02:50.263402  688786 cri.go:89] found id: ""
	I1014 21:02:50.263430  688786 logs.go:282] 0 containers: []
	W1014 21:02:50.263438  688786 logs.go:284] No container was found matching "etcd"
	I1014 21:02:50.263444  688786 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:02:50.263501  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:02:50.307374  688786 cri.go:89] found id: ""
	I1014 21:02:50.307403  688786 logs.go:282] 0 containers: []
	W1014 21:02:50.307412  688786 logs.go:284] No container was found matching "coredns"
	I1014 21:02:50.307419  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:02:50.307478  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:02:50.338712  688786 cri.go:89] found id: "cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:50.338738  688786 cri.go:89] found id: ""
	I1014 21:02:50.338746  688786 logs.go:282] 1 containers: [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1]
	I1014 21:02:50.338801  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:50.343287  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:02:50.343364  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:02:50.372366  688786 cri.go:89] found id: ""
	I1014 21:02:50.372394  688786 logs.go:282] 0 containers: []
	W1014 21:02:50.372402  688786 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:02:50.372408  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:02:50.372464  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:02:50.402116  688786 cri.go:89] found id: "e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:50.402142  688786 cri.go:89] found id: ""
	I1014 21:02:50.402151  688786 logs.go:282] 1 containers: [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de]
	I1014 21:02:50.402212  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:50.406988  688786 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:02:50.407080  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:02:50.447089  688786 cri.go:89] found id: ""
	I1014 21:02:50.447120  688786 logs.go:282] 0 containers: []
	W1014 21:02:50.447134  688786 logs.go:284] No container was found matching "kindnet"
	I1014 21:02:50.447141  688786 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 21:02:50.447205  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 21:02:50.497891  688786 cri.go:89] found id: ""
	I1014 21:02:50.497925  688786 logs.go:282] 0 containers: []
	W1014 21:02:50.497937  688786 logs.go:284] No container was found matching "storage-provisioner"
	I1014 21:02:50.497947  688786 logs.go:123] Gathering logs for dmesg ...
	I1014 21:02:50.497959  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 21:02:50.529078  688786 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:02:50.529117  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:02:50.647754  688786 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:02:50.647778  688786 logs.go:123] Gathering logs for kube-apiserver [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe] ...
	I1014 21:02:50.647791  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:50.709326  688786 logs.go:123] Gathering logs for kube-scheduler [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1] ...
	I1014 21:02:50.709363  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:50.803709  688786 logs.go:123] Gathering logs for kube-controller-manager [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de] ...
	I1014 21:02:50.803786  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:50.840292  688786 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:02:50.840343  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:02:50.910857  688786 logs.go:123] Gathering logs for container status ...
	I1014 21:02:50.911024  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 21:02:50.955471  688786 logs.go:123] Gathering logs for kubelet ...
	I1014 21:02:50.955497  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:02:50.908034  706484 node_ready.go:49] node "pause-361043" is "Ready"
	I1014 21:02:50.908059  706484 node_ready.go:38] duration metric: took 4.915147993s for node "pause-361043" to be "Ready" ...
	I1014 21:02:50.908072  706484 api_server.go:52] waiting for apiserver process to appear ...
	I1014 21:02:50.908130  706484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 21:02:50.931861  706484 api_server.go:72] duration metric: took 5.19584253s to wait for apiserver process to appear ...
	I1014 21:02:50.931884  706484 api_server.go:88] waiting for apiserver healthz status ...
	I1014 21:02:50.931904  706484 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1014 21:02:50.960009  706484 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 21:02:50.960034  706484 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 21:02:51.432257  706484 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1014 21:02:51.441841  706484 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 21:02:51.441885  706484 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 21:02:51.932386  706484 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1014 21:02:51.941629  706484 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1014 21:02:51.942633  706484 api_server.go:141] control plane version: v1.34.1
	I1014 21:02:51.942660  706484 api_server.go:131] duration metric: took 1.010768565s to wait for apiserver health ...
	I1014 21:02:51.942669  706484 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 21:02:51.949176  706484 system_pods.go:59] 7 kube-system pods found
	I1014 21:02:51.949212  706484 system_pods.go:61] "coredns-66bc5c9577-q7c7w" [b735bb9e-6349-42f7-bcb7-469aa10cf818] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 21:02:51.949222  706484 system_pods.go:61] "etcd-pause-361043" [11b46cdc-422e-44c6-bb9a-05a404336542] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 21:02:51.949228  706484 system_pods.go:61] "kindnet-9c498" [6487afd8-213d-4a40-b1e4-4e6b93d23583] Running
	I1014 21:02:51.949235  706484 system_pods.go:61] "kube-apiserver-pause-361043" [adcb78e6-b272-4878-9f25-dbbc7f0d3565] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 21:02:51.949242  706484 system_pods.go:61] "kube-controller-manager-pause-361043" [53842385-4844-45db-97ef-eaec0f03796d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 21:02:51.949252  706484 system_pods.go:61] "kube-proxy-fzbsz" [e0e8d997-af68-4fec-9867-c0a08098f401] Running
	I1014 21:02:51.949259  706484 system_pods.go:61] "kube-scheduler-pause-361043" [61b58669-642e-4139-a8b0-db07e49496d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 21:02:51.949267  706484 system_pods.go:74] duration metric: took 6.593256ms to wait for pod list to return data ...
	I1014 21:02:51.949275  706484 default_sa.go:34] waiting for default service account to be created ...
	I1014 21:02:51.999784  706484 default_sa.go:45] found service account: "default"
	I1014 21:02:51.999815  706484 default_sa.go:55] duration metric: took 50.527783ms for default service account to be created ...
	I1014 21:02:51.999833  706484 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 21:02:52.047564  706484 system_pods.go:86] 7 kube-system pods found
	I1014 21:02:52.047601  706484 system_pods.go:89] "coredns-66bc5c9577-q7c7w" [b735bb9e-6349-42f7-bcb7-469aa10cf818] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 21:02:52.047612  706484 system_pods.go:89] "etcd-pause-361043" [11b46cdc-422e-44c6-bb9a-05a404336542] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 21:02:52.047619  706484 system_pods.go:89] "kindnet-9c498" [6487afd8-213d-4a40-b1e4-4e6b93d23583] Running
	I1014 21:02:52.047628  706484 system_pods.go:89] "kube-apiserver-pause-361043" [adcb78e6-b272-4878-9f25-dbbc7f0d3565] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 21:02:52.047654  706484 system_pods.go:89] "kube-controller-manager-pause-361043" [53842385-4844-45db-97ef-eaec0f03796d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 21:02:52.047659  706484 system_pods.go:89] "kube-proxy-fzbsz" [e0e8d997-af68-4fec-9867-c0a08098f401] Running
	I1014 21:02:52.047666  706484 system_pods.go:89] "kube-scheduler-pause-361043" [61b58669-642e-4139-a8b0-db07e49496d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 21:02:52.047677  706484 system_pods.go:126] duration metric: took 47.837195ms to wait for k8s-apps to be running ...
	I1014 21:02:52.047690  706484 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 21:02:52.047750  706484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:02:52.062435  706484 system_svc.go:56] duration metric: took 14.735858ms WaitForService to wait for kubelet
	I1014 21:02:52.062462  706484 kubeadm.go:586] duration metric: took 6.326448482s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:02:52.062482  706484 node_conditions.go:102] verifying NodePressure condition ...
	I1014 21:02:52.065968  706484 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 21:02:52.066003  706484 node_conditions.go:123] node cpu capacity is 2
	I1014 21:02:52.066017  706484 node_conditions.go:105] duration metric: took 3.529099ms to run NodePressure ...
	I1014 21:02:52.066030  706484 start.go:241] waiting for startup goroutines ...
	I1014 21:02:52.066042  706484 start.go:246] waiting for cluster config update ...
	I1014 21:02:52.066055  706484 start.go:255] writing updated cluster config ...
	I1014 21:02:52.066390  706484 ssh_runner.go:195] Run: rm -f paused
	I1014 21:02:52.070338  706484 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 21:02:52.070963  706484 kapi.go:59] client config for pause-361043: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/client.key", CAFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 21:02:52.074539  706484 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q7c7w" in "kube-system" namespace to be "Ready" or be gone ...
	W1014 21:02:54.085710  706484 pod_ready.go:104] pod "coredns-66bc5c9577-q7c7w" is not "Ready", error: <nil>
	I1014 21:02:53.603267  688786 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:02:53.603628  688786 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1014 21:02:53.603668  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:02:53.603720  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:02:53.632145  688786 cri.go:89] found id: "a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:53.632164  688786 cri.go:89] found id: ""
	I1014 21:02:53.632173  688786 logs.go:282] 1 containers: [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe]
	I1014 21:02:53.632232  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:53.636186  688786 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:02:53.636253  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:02:53.666435  688786 cri.go:89] found id: ""
	I1014 21:02:53.666464  688786 logs.go:282] 0 containers: []
	W1014 21:02:53.666473  688786 logs.go:284] No container was found matching "etcd"
	I1014 21:02:53.666480  688786 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:02:53.666581  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:02:53.693524  688786 cri.go:89] found id: ""
	I1014 21:02:53.693548  688786 logs.go:282] 0 containers: []
	W1014 21:02:53.693557  688786 logs.go:284] No container was found matching "coredns"
	I1014 21:02:53.693564  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:02:53.693621  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:02:53.726454  688786 cri.go:89] found id: "cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:53.726477  688786 cri.go:89] found id: ""
	I1014 21:02:53.726486  688786 logs.go:282] 1 containers: [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1]
	I1014 21:02:53.726544  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:53.730373  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:02:53.730446  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:02:53.757223  688786 cri.go:89] found id: ""
	I1014 21:02:53.757249  688786 logs.go:282] 0 containers: []
	W1014 21:02:53.757260  688786 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:02:53.757266  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:02:53.757323  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:02:53.787437  688786 cri.go:89] found id: "e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:53.787458  688786 cri.go:89] found id: ""
	I1014 21:02:53.787467  688786 logs.go:282] 1 containers: [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de]
	I1014 21:02:53.787529  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:53.791362  688786 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:02:53.791446  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:02:53.820604  688786 cri.go:89] found id: ""
	I1014 21:02:53.820636  688786 logs.go:282] 0 containers: []
	W1014 21:02:53.820645  688786 logs.go:284] No container was found matching "kindnet"
	I1014 21:02:53.820651  688786 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 21:02:53.820710  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 21:02:53.848287  688786 cri.go:89] found id: ""
	I1014 21:02:53.848331  688786 logs.go:282] 0 containers: []
	W1014 21:02:53.848341  688786 logs.go:284] No container was found matching "storage-provisioner"
	I1014 21:02:53.848350  688786 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:02:53.848362  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:02:53.929431  688786 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:02:53.929452  688786 logs.go:123] Gathering logs for kube-apiserver [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe] ...
	I1014 21:02:53.929465  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:53.965225  688786 logs.go:123] Gathering logs for kube-scheduler [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1] ...
	I1014 21:02:53.965255  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:54.028801  688786 logs.go:123] Gathering logs for kube-controller-manager [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de] ...
	I1014 21:02:54.028839  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:54.058038  688786 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:02:54.058067  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:02:54.123579  688786 logs.go:123] Gathering logs for container status ...
	I1014 21:02:54.123618  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 21:02:54.155877  688786 logs.go:123] Gathering logs for kubelet ...
	I1014 21:02:54.155901  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:02:54.275924  688786 logs.go:123] Gathering logs for dmesg ...
	I1014 21:02:54.275961  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 21:02:56.796079  688786 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:02:56.796577  688786 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1014 21:02:56.796626  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:02:56.796687  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:02:56.823493  688786 cri.go:89] found id: "a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:56.823511  688786 cri.go:89] found id: ""
	I1014 21:02:56.823518  688786 logs.go:282] 1 containers: [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe]
	I1014 21:02:56.823574  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:56.827444  688786 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:02:56.827517  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:02:56.857837  688786 cri.go:89] found id: ""
	I1014 21:02:56.857863  688786 logs.go:282] 0 containers: []
	W1014 21:02:56.857872  688786 logs.go:284] No container was found matching "etcd"
	I1014 21:02:56.857878  688786 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:02:56.857933  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:02:56.884003  688786 cri.go:89] found id: ""
	I1014 21:02:56.884025  688786 logs.go:282] 0 containers: []
	W1014 21:02:56.884033  688786 logs.go:284] No container was found matching "coredns"
	I1014 21:02:56.884040  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:02:56.884098  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:02:56.911730  688786 cri.go:89] found id: "cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:56.911799  688786 cri.go:89] found id: ""
	I1014 21:02:56.911811  688786 logs.go:282] 1 containers: [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1]
	I1014 21:02:56.911901  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:56.915904  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:02:56.916029  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:02:56.942504  688786 cri.go:89] found id: ""
	I1014 21:02:56.942585  688786 logs.go:282] 0 containers: []
	W1014 21:02:56.942603  688786 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:02:56.942610  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:02:56.942681  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:02:56.969659  688786 cri.go:89] found id: "e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:56.969681  688786 cri.go:89] found id: ""
	I1014 21:02:56.969691  688786 logs.go:282] 1 containers: [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de]
	I1014 21:02:56.969771  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:56.973670  688786 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:02:56.973758  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:02:57.012390  688786 cri.go:89] found id: ""
	I1014 21:02:57.012415  688786 logs.go:282] 0 containers: []
	W1014 21:02:57.012424  688786 logs.go:284] No container was found matching "kindnet"
	I1014 21:02:57.012431  688786 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 21:02:57.012491  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 21:02:57.048661  688786 cri.go:89] found id: ""
	I1014 21:02:57.048685  688786 logs.go:282] 0 containers: []
	W1014 21:02:57.048694  688786 logs.go:284] No container was found matching "storage-provisioner"
	I1014 21:02:57.048703  688786 logs.go:123] Gathering logs for kube-apiserver [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe] ...
	I1014 21:02:57.048715  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:57.081104  688786 logs.go:123] Gathering logs for kube-scheduler [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1] ...
	I1014 21:02:57.081139  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:57.149555  688786 logs.go:123] Gathering logs for kube-controller-manager [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de] ...
	I1014 21:02:57.149591  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:57.177380  688786 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:02:57.177408  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:02:57.245232  688786 logs.go:123] Gathering logs for container status ...
	I1014 21:02:57.245278  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 21:02:57.278921  688786 logs.go:123] Gathering logs for kubelet ...
	I1014 21:02:57.278949  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:02:57.399467  688786 logs.go:123] Gathering logs for dmesg ...
	I1014 21:02:57.399504  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 21:02:57.417849  688786 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:02:57.417882  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:02:57.490254  688786 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:02:56.581093  706484 pod_ready.go:94] pod "coredns-66bc5c9577-q7c7w" is "Ready"
	I1014 21:02:56.581124  706484 pod_ready.go:86] duration metric: took 4.506557043s for pod "coredns-66bc5c9577-q7c7w" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:02:56.583709  706484 pod_ready.go:83] waiting for pod "etcd-pause-361043" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:02:56.587858  706484 pod_ready.go:94] pod "etcd-pause-361043" is "Ready"
	I1014 21:02:56.587887  706484 pod_ready.go:86] duration metric: took 4.150079ms for pod "etcd-pause-361043" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:02:56.590065  706484 pod_ready.go:83] waiting for pod "kube-apiserver-pause-361043" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:02:56.594215  706484 pod_ready.go:94] pod "kube-apiserver-pause-361043" is "Ready"
	I1014 21:02:56.594245  706484 pod_ready.go:86] duration metric: took 4.151843ms for pod "kube-apiserver-pause-361043" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:02:56.596684  706484 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-361043" in "kube-system" namespace to be "Ready" or be gone ...
	W1014 21:02:58.602598  706484 pod_ready.go:104] pod "kube-controller-manager-pause-361043" is not "Ready", error: <nil>
	W1014 21:03:01.102193  706484 pod_ready.go:104] pod "kube-controller-manager-pause-361043" is not "Ready", error: <nil>
	I1014 21:03:02.602627  706484 pod_ready.go:94] pod "kube-controller-manager-pause-361043" is "Ready"
	I1014 21:03:02.602654  706484 pod_ready.go:86] duration metric: took 6.005912904s for pod "kube-controller-manager-pause-361043" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:03:02.604940  706484 pod_ready.go:83] waiting for pod "kube-proxy-fzbsz" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:03:02.609472  706484 pod_ready.go:94] pod "kube-proxy-fzbsz" is "Ready"
	I1014 21:03:02.609499  706484 pod_ready.go:86] duration metric: took 4.532124ms for pod "kube-proxy-fzbsz" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:03:02.611854  706484 pod_ready.go:83] waiting for pod "kube-scheduler-pause-361043" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:03:02.778706  706484 pod_ready.go:94] pod "kube-scheduler-pause-361043" is "Ready"
	I1014 21:03:02.778737  706484 pod_ready.go:86] duration metric: took 166.856143ms for pod "kube-scheduler-pause-361043" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:03:02.778750  706484 pod_ready.go:40] duration metric: took 10.70837563s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 21:03:02.830930  706484 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1014 21:03:02.834155  706484 out.go:179] * Done! kubectl is now configured to use "pause-361043" cluster and "default" namespace by default
	I1014 21:02:59.991538  688786 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:02:59.991935  688786 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1014 21:02:59.991981  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:02:59.992040  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:03:00.100543  688786 cri.go:89] found id: "a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:03:00.100624  688786 cri.go:89] found id: ""
	I1014 21:03:00.100649  688786 logs.go:282] 1 containers: [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe]
	I1014 21:03:00.100754  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:03:00.115475  688786 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:03:00.115558  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:03:00.236792  688786 cri.go:89] found id: ""
	I1014 21:03:00.236819  688786 logs.go:282] 0 containers: []
	W1014 21:03:00.236829  688786 logs.go:284] No container was found matching "etcd"
	I1014 21:03:00.236837  688786 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:03:00.236908  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:03:00.275544  688786 cri.go:89] found id: ""
	I1014 21:03:00.275571  688786 logs.go:282] 0 containers: []
	W1014 21:03:00.275579  688786 logs.go:284] No container was found matching "coredns"
	I1014 21:03:00.275587  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:03:00.275657  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:03:00.342716  688786 cri.go:89] found id: "cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:03:00.342749  688786 cri.go:89] found id: ""
	I1014 21:03:00.342759  688786 logs.go:282] 1 containers: [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1]
	I1014 21:03:00.342851  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:03:00.348207  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:03:00.348290  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:03:00.383298  688786 cri.go:89] found id: ""
	I1014 21:03:00.383325  688786 logs.go:282] 0 containers: []
	W1014 21:03:00.383334  688786 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:03:00.383342  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:03:00.383404  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:03:00.416146  688786 cri.go:89] found id: "e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:03:00.416173  688786 cri.go:89] found id: ""
	I1014 21:03:00.416182  688786 logs.go:282] 1 containers: [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de]
	I1014 21:03:00.416252  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:03:00.421039  688786 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:03:00.421124  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:03:00.452733  688786 cri.go:89] found id: ""
	I1014 21:03:00.452765  688786 logs.go:282] 0 containers: []
	W1014 21:03:00.452774  688786 logs.go:284] No container was found matching "kindnet"
	I1014 21:03:00.452781  688786 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 21:03:00.452850  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 21:03:00.483787  688786 cri.go:89] found id: ""
	I1014 21:03:00.483866  688786 logs.go:282] 0 containers: []
	W1014 21:03:00.483888  688786 logs.go:284] No container was found matching "storage-provisioner"
	I1014 21:03:00.483914  688786 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:03:00.483955  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:03:00.574561  688786 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:03:00.574582  688786 logs.go:123] Gathering logs for kube-apiserver [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe] ...
	I1014 21:03:00.574597  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:03:00.628481  688786 logs.go:123] Gathering logs for kube-scheduler [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1] ...
	I1014 21:03:00.628517  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:03:00.713178  688786 logs.go:123] Gathering logs for kube-controller-manager [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de] ...
	I1014 21:03:00.713212  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:03:00.748995  688786 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:03:00.749028  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:03:00.815682  688786 logs.go:123] Gathering logs for container status ...
	I1014 21:03:00.815714  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 21:03:00.849469  688786 logs.go:123] Gathering logs for kubelet ...
	I1014 21:03:00.849539  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:03:00.976667  688786 logs.go:123] Gathering logs for dmesg ...
	I1014 21:03:00.976703  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	
	
	==> CRI-O <==
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.532898821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.604466983Z" level=info msg="Created container 23cf2e7d8049384cf8d3943f02d514c3e391d5ad704b75431865b05d3d029f22: kube-system/etcd-pause-361043/etcd" id=d25b7f23-894b-4efe-96f1-6d64b3f204a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.610150187Z" level=info msg="Starting container: 23cf2e7d8049384cf8d3943f02d514c3e391d5ad704b75431865b05d3d029f22" id=88a9669f-e396-4f49-85d0-bf268bdff15b name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.622038859Z" level=info msg="Started container" PID=2215 containerID=23cf2e7d8049384cf8d3943f02d514c3e391d5ad704b75431865b05d3d029f22 description=kube-system/etcd-pause-361043/etcd id=88a9669f-e396-4f49-85d0-bf268bdff15b name=/runtime.v1.RuntimeService/StartContainer sandboxID=22734e2dda3249ff64753ab00f5a313383f9440852bd1485f849f62fdf47eacd
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.630617126Z" level=info msg="Created container fbcaf7a84e4396d138d2b4f56efa1373b3bcedfc203b447d251e52b1052cba5c: kube-system/kube-apiserver-pause-361043/kube-apiserver" id=08d1a8fb-c598-433f-9efe-f6482014381d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.633306459Z" level=info msg="Starting container: fbcaf7a84e4396d138d2b4f56efa1373b3bcedfc203b447d251e52b1052cba5c" id=76adcc27-7f91-4375-8cbc-31850b90fe0d name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.640714602Z" level=info msg="Started container" PID=2228 containerID=fbcaf7a84e4396d138d2b4f56efa1373b3bcedfc203b447d251e52b1052cba5c description=kube-system/kube-apiserver-pause-361043/kube-apiserver id=76adcc27-7f91-4375-8cbc-31850b90fe0d name=/runtime.v1.RuntimeService/StartContainer sandboxID=b5f24ea729c91b3c22f47d9d183da6a6890d4eef08eb2d37e30bc80aff805fea
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.644538425Z" level=info msg="Created container 660ef0bec4428d1153fa555e4a0f7daa6380e392888604d04e0e4322d7771003: kube-system/kube-scheduler-pause-361043/kube-scheduler" id=7e952e1a-af91-4226-ac3e-60b8bc850b23 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.647136212Z" level=info msg="Created container a6a670e4b96d401ca63818f2706aa6e56c947a842bddbb42558d305cb8007f32: kube-system/kube-controller-manager-pause-361043/kube-controller-manager" id=d5476cb0-24eb-4749-a7a0-c3ba5b65e8e5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.650227094Z" level=info msg="Starting container: 660ef0bec4428d1153fa555e4a0f7daa6380e392888604d04e0e4322d7771003" id=bde03497-9c19-4638-956a-ce8ced86a76c name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.650713484Z" level=info msg="Starting container: a6a670e4b96d401ca63818f2706aa6e56c947a842bddbb42558d305cb8007f32" id=e16c9e54-94cd-4e00-ab9e-60744131d69f name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.658916928Z" level=info msg="Started container" PID=2224 containerID=660ef0bec4428d1153fa555e4a0f7daa6380e392888604d04e0e4322d7771003 description=kube-system/kube-scheduler-pause-361043/kube-scheduler id=bde03497-9c19-4638-956a-ce8ced86a76c name=/runtime.v1.RuntimeService/StartContainer sandboxID=9ae32023727319efe10177b16746b9f399b0609c61308150fda679eee1967c92
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.666085002Z" level=info msg="Started container" PID=2237 containerID=a6a670e4b96d401ca63818f2706aa6e56c947a842bddbb42558d305cb8007f32 description=kube-system/kube-controller-manager-pause-361043/kube-controller-manager id=e16c9e54-94cd-4e00-ab9e-60744131d69f name=/runtime.v1.RuntimeService/StartContainer sandboxID=26e00dbaf68a708e39bd85c37c963d66536f9c0351c0ea3024d5fbb700035376
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.888788807Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.893566555Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.893621325Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.893675701Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.897203758Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.897241436Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.89726597Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.900759343Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.900796414Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.900837646Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.904088677Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.904120038Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a6a670e4b96d4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   21 seconds ago       Running             kube-controller-manager   1                   26e00dbaf68a7       kube-controller-manager-pause-361043   kube-system
	fbcaf7a84e439       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   21 seconds ago       Running             kube-apiserver            1                   b5f24ea729c91       kube-apiserver-pause-361043            kube-system
	23cf2e7d80493       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   21 seconds ago       Running             etcd                      1                   22734e2dda324       etcd-pause-361043                      kube-system
	660ef0bec4428       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   21 seconds ago       Running             kube-scheduler            1                   9ae3202372731       kube-scheduler-pause-361043            kube-system
	ec1e6454c986c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   21 seconds ago       Running             coredns                   1                   9565642c79b38       coredns-66bc5c9577-q7c7w               kube-system
	0656608153f0e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   21 seconds ago       Running             kube-proxy                1                   2caa569e9d1e8       kube-proxy-fzbsz                       kube-system
	0b6f7661d8bed       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   60c6d59c1e5a2       kindnet-9c498                          kube-system
	29d1f875e07fb       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   33 seconds ago       Exited              coredns                   0                   9565642c79b38       coredns-66bc5c9577-q7c7w               kube-system
	fe32c1a3bade0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   2caa569e9d1e8       kube-proxy-fzbsz                       kube-system
	aa095dcde47c4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   60c6d59c1e5a2       kindnet-9c498                          kube-system
	5f45c9528a708       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   9ae3202372731       kube-scheduler-pause-361043            kube-system
	055ff8e53d7f5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   26e00dbaf68a7       kube-controller-manager-pause-361043   kube-system
	a49c5d87c8b81       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   22734e2dda324       etcd-pause-361043                      kube-system
	8d254f086a1c6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   b5f24ea729c91       kube-apiserver-pause-361043            kube-system
	
	
	==> coredns [29d1f875e07fb368ad19aeba36a9b146d4a9b45accd812d3388a16b14c5199c5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59355 - 9609 "HINFO IN 370189398641508826.7782346586602368151. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012875097s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ec1e6454c986ce31b077f25fc95ab32526d877c36bcb22d9c7e025f1cce4a245] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43828 - 3659 "HINFO IN 5811801055065340421.2246031547546777298. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012540767s
	
	
	==> describe nodes <==
	Name:               pause-361043
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-361043
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=pause-361043
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T21_01_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 21:01:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-361043
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 21:03:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 21:02:32 +0000   Tue, 14 Oct 2025 21:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 21:02:32 +0000   Tue, 14 Oct 2025 21:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 21:02:32 +0000   Tue, 14 Oct 2025 21:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 21:02:32 +0000   Tue, 14 Oct 2025 21:02:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-361043
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf347392368842a7bb7f131e8ecf1a1f
	  System UUID:                f23f452e-5c6e-4e29-afd4-8f1ca889b956
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-q7c7w                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     75s
	  kube-system                 etcd-pause-361043                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         80s
	  kube-system                 kindnet-9c498                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-361043             250m (12%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-controller-manager-pause-361043    200m (10%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-proxy-fzbsz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-361043             100m (5%)     0 (0%)      0 (0%)           0 (0%)         80s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 74s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Warning  CgroupV1                 88s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  88s (x8 over 88s)  kubelet          Node pause-361043 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    88s (x8 over 88s)  kubelet          Node pause-361043 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     88s (x8 over 88s)  kubelet          Node pause-361043 status is now: NodeHasSufficientPID
	  Normal   Starting                 80s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 80s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  80s                kubelet          Node pause-361043 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    80s                kubelet          Node pause-361043 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     80s                kubelet          Node pause-361043 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           76s                node-controller  Node pause-361043 event: Registered Node pause-361043 in Controller
	  Normal   NodeReady                34s                kubelet          Node pause-361043 status is now: NodeReady
	  Normal   RegisteredNode           13s                node-controller  Node pause-361043 event: Registered Node pause-361043 in Controller
	
	
	==> dmesg <==
	[  +4.162636] overlayfs: idmapped layers are currently not supported
	[Oct14 20:26] overlayfs: idmapped layers are currently not supported
	[Oct14 20:34] overlayfs: idmapped layers are currently not supported
	[  +3.850661] overlayfs: idmapped layers are currently not supported
	[Oct14 20:35] overlayfs: idmapped layers are currently not supported
	[Oct14 20:36] overlayfs: idmapped layers are currently not supported
	[Oct14 20:38] overlayfs: idmapped layers are currently not supported
	[Oct14 20:43] overlayfs: idmapped layers are currently not supported
	[ +36.710003] overlayfs: idmapped layers are currently not supported
	[Oct14 20:45] overlayfs: idmapped layers are currently not supported
	[Oct14 20:46] overlayfs: idmapped layers are currently not supported
	[Oct14 20:47] overlayfs: idmapped layers are currently not supported
	[Oct14 20:48] overlayfs: idmapped layers are currently not supported
	[Oct14 20:49] overlayfs: idmapped layers are currently not supported
	[ +15.861847] overlayfs: idmapped layers are currently not supported
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [23cf2e7d8049384cf8d3943f02d514c3e391d5ad704b75431865b05d3d029f22] <==
	{"level":"warn","ts":"2025-10-14T21:02:49.414046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.433550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.449344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.468148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.490723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.500965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.521033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.535694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.564469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.578405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.621999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.624139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.642141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.668653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.682861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.703840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.724526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.738565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.756633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.773994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.812263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.843486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.880203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.884857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.938363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51906","server-name":"","error":"EOF"}
	
	
	==> etcd [a49c5d87c8b814e895398783eb4a4f8388fdc2356900055aae236ddce0b78dfa] <==
	{"level":"warn","ts":"2025-10-14T21:01:42.025346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:01:42.045557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:01:42.068015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:01:42.100578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:01:42.112999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:01:42.165435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:01:42.252399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57340","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-14T21:02:37.047309Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-14T21:02:37.047365Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-361043","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-14T21:02:37.047455Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-14T21:02:37.191890Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-14T21:02:37.192263Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-14T21:02:37.192046Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-14T21:02:37.192080Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"warn","ts":"2025-10-14T21:02:37.192190Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-14T21:02:37.192446Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-14T21:02:37.192488Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-14T21:02:37.192534Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-14T21:02:37.192571Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-14T21:02:37.192307Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-14T21:02:37.192836Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-14T21:02:37.195939Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-14T21:02:37.196066Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-14T21:02:37.196160Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-14T21:02:37.196194Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-361043","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 21:03:06 up  3:45,  0 user,  load average: 2.36, 2.41, 2.06
	Linux pause-361043 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0b6f7661d8beda525ec254094f39f88e7c53c0c15a7ddaad9ae3d5d15de7ab78] <==
	I1014 21:02:44.681093       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:02:44.681701       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1014 21:02:44.681844       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:02:44.681856       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:02:44.681871       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:02:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:02:44.888244       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:02:44.889955       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:02:44.902455       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:02:44.903447       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1014 21:02:51.103862       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1014 21:02:51.103969       1 metrics.go:72] Registering metrics
	I1014 21:02:51.104054       1 controller.go:711] "Syncing nftables rules"
	I1014 21:02:54.888286       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1014 21:02:54.888435       1 main.go:301] handling current node
	I1014 21:03:04.888390       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1014 21:03:04.888452       1 main.go:301] handling current node
	
	
	==> kindnet [aa095dcde47c4e0f4ed9486946661610675ccee537498ca0707b769339e8e582] <==
	I1014 21:01:51.678097       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:01:51.679589       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1014 21:01:51.681999       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:01:51.682102       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:01:51.682148       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:01:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:01:51.875804       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:01:51.875823       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:01:51.875832       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:01:51.876176       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1014 21:02:21.875989       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1014 21:02:21.876111       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1014 21:02:21.876242       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1014 21:02:21.877341       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1014 21:02:23.375994       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1014 21:02:23.376026       1 metrics.go:72] Registering metrics
	I1014 21:02:23.376096       1 controller.go:711] "Syncing nftables rules"
	I1014 21:02:31.875875       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1014 21:02:31.875934       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8d254f086a1c65340f4c5ac0cbb7b76bbedcd67dd47f333308d7d82abfd769b6] <==
	W1014 21:02:37.070413       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.070459       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.070510       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.070555       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.070602       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.070663       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.070709       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.070754       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.071564       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.072955       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.072992       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073017       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073040       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073065       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073091       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073115       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073140       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073166       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073197       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073223       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073254       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073280       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073307       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073333       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [fbcaf7a84e4396d138d2b4f56efa1373b3bcedfc203b447d251e52b1052cba5c] <==
	I1014 21:02:50.926340       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 21:02:51.011358       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:02:51.028568       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1014 21:02:51.028679       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 21:02:51.028714       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 21:02:51.028893       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1014 21:02:51.028965       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1014 21:02:51.029025       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1014 21:02:51.041437       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1014 21:02:51.042497       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1014 21:02:51.045141       1 policy_source.go:240] refreshing policies
	I1014 21:02:51.096969       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1014 21:02:51.114762       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 21:02:51.126382       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 21:02:51.126493       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 21:02:51.126690       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 21:02:51.129358       1 cache.go:39] Caches are synced for autoregister controller
	E1014 21:02:51.130321       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1014 21:02:51.132818       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1014 21:02:51.657593       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 21:02:51.908623       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 21:02:53.306690       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 21:02:53.507230       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1014 21:02:53.606669       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 21:02:53.711774       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [055ff8e53d7f5e81c45b6752bfdfaa25d49b2274d3c55e4ff21315a5cc1b8042] <==
	I1014 21:01:50.051082       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1014 21:01:50.052776       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:01:50.058435       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1014 21:01:50.074051       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:01:50.078741       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:01:50.094235       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1014 21:01:50.094600       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1014 21:01:50.096480       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1014 21:01:50.096529       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1014 21:01:50.096559       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1014 21:01:50.096629       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1014 21:01:50.096681       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1014 21:01:50.097702       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1014 21:01:50.097892       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1014 21:01:50.098212       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1014 21:01:50.098618       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1014 21:01:50.098774       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1014 21:01:50.098807       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1014 21:01:50.098867       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1014 21:01:50.098896       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1014 21:01:50.098923       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1014 21:01:50.099850       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1014 21:01:50.100402       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1014 21:01:50.112429       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1014 21:02:35.060484       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [a6a670e4b96d401ca63818f2706aa6e56c947a842bddbb42558d305cb8007f32] <==
	I1014 21:02:53.309919       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1014 21:02:53.309985       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1014 21:02:53.310013       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 21:02:53.310023       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1014 21:02:53.310029       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1014 21:02:53.311365       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:02:53.321637       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:02:53.322673       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1014 21:02:53.325825       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1014 21:02:53.325980       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1014 21:02:53.329154       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1014 21:02:53.329164       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1014 21:02:53.332566       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1014 21:02:53.333808       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1014 21:02:53.336083       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1014 21:02:53.337258       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1014 21:02:53.340568       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1014 21:02:53.343832       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1014 21:02:53.347430       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1014 21:02:53.348202       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1014 21:02:53.348361       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1014 21:02:53.348453       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1014 21:02:53.351519       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:02:53.353644       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1014 21:02:53.355876       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	
	
	==> kube-proxy [0656608153f0e568502f545a2d6fcbc28356167d353ce4ef22b6e0c9db00be42] <==
	I1014 21:02:44.709605       1 server_linux.go:53] "Using iptables proxy"
	I1014 21:02:46.199666       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 21:02:51.199831       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 21:02:51.199896       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1014 21:02:51.199961       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 21:02:51.264707       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:02:51.264823       1 server_linux.go:132] "Using iptables Proxier"
	I1014 21:02:51.272604       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 21:02:51.273049       1 server.go:527] "Version info" version="v1.34.1"
	I1014 21:02:51.274443       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:02:51.279163       1 config.go:200] "Starting service config controller"
	I1014 21:02:51.280303       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 21:02:51.280571       1 config.go:106] "Starting endpoint slice config controller"
	I1014 21:02:51.280602       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 21:02:51.280653       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 21:02:51.280703       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 21:02:51.285301       1 config.go:309] "Starting node config controller"
	I1014 21:02:51.287324       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 21:02:51.287416       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 21:02:51.380932       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1014 21:02:51.382527       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 21:02:51.382564       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [fe32c1a3bade0fcd3521bb4bfa21cb5373816ddea996c29b888604e2792f7df8] <==
	I1014 21:01:51.676940       1 server_linux.go:53] "Using iptables proxy"
	I1014 21:01:51.777975       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 21:01:51.880143       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 21:01:51.880196       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1014 21:01:51.880304       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 21:01:51.995888       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:01:51.995996       1 server_linux.go:132] "Using iptables Proxier"
	I1014 21:01:52.007592       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 21:01:52.007911       1 server.go:527] "Version info" version="v1.34.1"
	I1014 21:01:52.007924       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:01:52.009487       1 config.go:200] "Starting service config controller"
	I1014 21:01:52.009498       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 21:01:52.009513       1 config.go:106] "Starting endpoint slice config controller"
	I1014 21:01:52.009518       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 21:01:52.009531       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 21:01:52.009537       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 21:01:52.019435       1 config.go:309] "Starting node config controller"
	I1014 21:01:52.019456       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 21:01:52.019465       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 21:01:52.112454       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 21:01:52.114279       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 21:01:52.114362       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5f45c9528a708bad4f13dd1f5f8d17082f4445a4bfd9e9b4dd4679184e33b4c7] <==
	E1014 21:01:43.745941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1014 21:01:43.746071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1014 21:01:43.746965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1014 21:01:43.747088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1014 21:01:43.747231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1014 21:01:43.747466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1014 21:01:43.747570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1014 21:01:43.747628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1014 21:01:43.747679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1014 21:01:43.747758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1014 21:01:43.747809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1014 21:01:43.747870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1014 21:01:43.747925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1014 21:01:43.748056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1014 21:01:43.748128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1014 21:01:43.748200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1014 21:01:43.748226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1014 21:01:43.748960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1014 21:01:45.135950       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:02:37.059148       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1014 21:02:37.059279       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1014 21:02:37.059308       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1014 21:02:37.059335       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:02:37.059672       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1014 21:02:37.059692       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [660ef0bec4428d1153fa555e4a0f7daa6380e392888604d04e0e4322d7771003] <==
	I1014 21:02:47.343792       1 serving.go:386] Generated self-signed cert in-memory
	W1014 21:02:50.960747       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1014 21:02:50.960852       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1014 21:02:50.960889       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 21:02:50.960932       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 21:02:51.044262       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 21:02:51.045309       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:02:51.047680       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 21:02:51.052406       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:02:51.056844       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:02:51.052445       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 21:02:51.157613       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.350612    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-361043\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6ea7ed8bc1f3dd79179be12356e4393e" pod="kube-system/kube-scheduler-pause-361043"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.350902    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-361043\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="ec849d8c65d9dfc160d5f2b6d4cb19f8" pod="kube-system/etcd-pause-361043"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.351176    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-361043\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e805b6bd7dcfd9e2f78a53b618f01505" pod="kube-system/kube-apiserver-pause-361043"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.351720    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-9c498\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6487afd8-213d-4a40-b1e4-4e6b93d23583" pod="kube-system/kindnet-9c498"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: I1014 21:02:44.368767    1302 scope.go:117] "RemoveContainer" containerID="8d254f086a1c65340f4c5ac0cbb7b76bbedcd67dd47f333308d7d82abfd769b6"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.369244    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-361043\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e805b6bd7dcfd9e2f78a53b618f01505" pod="kube-system/kube-apiserver-pause-361043"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.369864    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-9c498\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6487afd8-213d-4a40-b1e4-4e6b93d23583" pod="kube-system/kindnet-9c498"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.370136    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fzbsz\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e0e8d997-af68-4fec-9867-c0a08098f401" pod="kube-system/kube-proxy-fzbsz"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.370416    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-q7c7w\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b735bb9e-6349-42f7-bcb7-469aa10cf818" pod="kube-system/coredns-66bc5c9577-q7c7w"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.370793    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-361043\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="2c77ac6e452a94256a526b5c17c9d836" pod="kube-system/kube-controller-manager-pause-361043"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.371059    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-361043\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6ea7ed8bc1f3dd79179be12356e4393e" pod="kube-system/kube-scheduler-pause-361043"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.371374    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-361043\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="ec849d8c65d9dfc160d5f2b6d4cb19f8" pod="kube-system/etcd-pause-361043"
	Oct 14 21:02:50 pause-361043 kubelet[1302]: E1014 21:02:50.767517    1302 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-361043\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-361043' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 14 21:02:50 pause-361043 kubelet[1302]: E1014 21:02:50.768274    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-361043\" is forbidden: User \"system:node:pause-361043\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-361043' and this object" podUID="ec849d8c65d9dfc160d5f2b6d4cb19f8" pod="kube-system/etcd-pause-361043"
	Oct 14 21:02:50 pause-361043 kubelet[1302]: E1014 21:02:50.768710    1302 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-361043\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-361043' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 14 21:02:50 pause-361043 kubelet[1302]: E1014 21:02:50.768834    1302 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-361043\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-361043' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 14 21:02:50 pause-361043 kubelet[1302]: E1014 21:02:50.861314    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-361043\" is forbidden: User \"system:node:pause-361043\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-361043' and this object" podUID="e805b6bd7dcfd9e2f78a53b618f01505" pod="kube-system/kube-apiserver-pause-361043"
	Oct 14 21:02:50 pause-361043 kubelet[1302]: E1014 21:02:50.940943    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-9c498\" is forbidden: User \"system:node:pause-361043\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-361043' and this object" podUID="6487afd8-213d-4a40-b1e4-4e6b93d23583" pod="kube-system/kindnet-9c498"
	Oct 14 21:02:50 pause-361043 kubelet[1302]: E1014 21:02:50.966138    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-fzbsz\" is forbidden: User \"system:node:pause-361043\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-361043' and this object" podUID="e0e8d997-af68-4fec-9867-c0a08098f401" pod="kube-system/kube-proxy-fzbsz"
	Oct 14 21:02:50 pause-361043 kubelet[1302]: E1014 21:02:50.985743    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-q7c7w\" is forbidden: User \"system:node:pause-361043\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-361043' and this object" podUID="b735bb9e-6349-42f7-bcb7-469aa10cf818" pod="kube-system/coredns-66bc5c9577-q7c7w"
	Oct 14 21:02:50 pause-361043 kubelet[1302]: E1014 21:02:50.999049    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-361043\" is forbidden: User \"system:node:pause-361043\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-361043' and this object" podUID="2c77ac6e452a94256a526b5c17c9d836" pod="kube-system/kube-controller-manager-pause-361043"
	Oct 14 21:02:56 pause-361043 kubelet[1302]: W1014 21:02:56.363372    1302 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 14 21:03:03 pause-361043 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 14 21:03:03 pause-361043 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 14 21:03:03 pause-361043 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
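The scheduler and kubelet logs above end with the API server on 192.168.76.2:8443 refusing connections and kubelet.service being stopped by systemd, which is consistent with what `minikube pause` is meant to do. When reproducing this kind of failure by hand, a quick way to confirm whether the apiserver is still answering is to hit its /healthz endpoint through the port docker publishes for 8443/tcp (33760 in the docker inspect output further down). A minimal sketch, not part of the test harness, assuming that forwarded port:

	// healthz_probe.go: a diagnostic sketch, not part of helpers_test.go.
	// 127.0.0.1:33760 is an assumption taken from the docker inspect output
	// below, where 8443/tcp is published on that host port.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a self-signed certificate, so skip
			// verification for this probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://127.0.0.1:33760/healthz")
		if err != nil {
			// An unreachable apiserver surfaces here much like the
			// "connection refused" errors in the kubelet log above.
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, string(body))
	}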
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-361043 -n pause-361043
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-361043 -n pause-361043: exit status 2 (466.249594ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-361043 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
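The `--format={{.APIServer}}` argument above (and the `{{.Host}}` variant used further down) is a Go text/template evaluated against minikube's status structure, which is why a single field can be extracted as plain text. A stand-in sketch, using a hypothetical Status type whose field names merely mirror the templates used here:

	// status_format.go: an illustration only; Status is a stand-in type,
	// not minikube's actual implementation.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
		// Equivalent of: minikube status --format={{.APIServer}}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = tmpl.Execute(os.Stdout, st) // prints: Running
	}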
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
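The proxy snapshot above records HTTP_PROXY, HTTPS_PROXY, and NO_PROXY so that an "<empty>" value rules out proxy interference as a cause of the failure. A trivial sketch of the same check:

	// proxy_env.go: a minimal sketch of the environment snapshot above.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
			v := os.Getenv(k)
			if v == "" {
				v = "<empty>"
			}
			fmt.Printf("%s=%s\n", k, v)
		}
	}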
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-361043
helpers_test.go:243: (dbg) docker inspect pause-361043:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "30fbd794e3c16303994914ecbf5d4ffbcb06c36c9d17ba6327ec7862f26c9a1b",
	        "Created": "2025-10-14T21:01:21.788832788Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 702695,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:01:21.858603225Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/30fbd794e3c16303994914ecbf5d4ffbcb06c36c9d17ba6327ec7862f26c9a1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/30fbd794e3c16303994914ecbf5d4ffbcb06c36c9d17ba6327ec7862f26c9a1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/30fbd794e3c16303994914ecbf5d4ffbcb06c36c9d17ba6327ec7862f26c9a1b/hosts",
	        "LogPath": "/var/lib/docker/containers/30fbd794e3c16303994914ecbf5d4ffbcb06c36c9d17ba6327ec7862f26c9a1b/30fbd794e3c16303994914ecbf5d4ffbcb06c36c9d17ba6327ec7862f26c9a1b-json.log",
	        "Name": "/pause-361043",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-361043:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-361043",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "30fbd794e3c16303994914ecbf5d4ffbcb06c36c9d17ba6327ec7862f26c9a1b",
	                "LowerDir": "/var/lib/docker/overlay2/9f057a2b18fcc42e39e64e7507ef97ce77f29a98169d3a063aa4d416c0ecc9ef-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9f057a2b18fcc42e39e64e7507ef97ce77f29a98169d3a063aa4d416c0ecc9ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9f057a2b18fcc42e39e64e7507ef97ce77f29a98169d3a063aa4d416c0ecc9ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9f057a2b18fcc42e39e64e7507ef97ce77f29a98169d3a063aa4d416c0ecc9ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-361043",
	                "Source": "/var/lib/docker/volumes/pause-361043/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-361043",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-361043",
	                "name.minikube.sigs.k8s.io": "pause-361043",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "99702bf6229b2670218018918bb2ab6c88ab78094dbaa12a08c456125cfed0a0",
	            "SandboxKey": "/var/run/docker/netns/99702bf6229b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33757"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33758"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33761"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33759"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33760"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-361043": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:ba:32:e2:c9:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6d18a8d03867b6fb9628f9a0a89945c5a6d5f894c5375753085a048b866d2bee",
	                    "EndpointID": "ac07ba91ec238c308232d1b00e420101c6b77f1ad39b3bc714ac7b76bb844ce8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-361043",
	                        "30fbd794e3c1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
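The inspect output above shows each container port published on a loopback host port (22/tcp on 33757, 8443/tcp on 33760, and so on), which is how the harness reaches SSH and the apiserver from the host. Later in the logs the harness extracts the SSH port with a Go template passed to `docker container inspect -f`; a sketch of the same technique for the apiserver port, assuming a local docker CLI and the pause-361043 container above:

	// port_lookup.go: a sketch using the same template style as the
	// "docker container inspect -f ... 22/tcp ..." calls further down.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		format := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "pause-361043").CombinedOutput()
		if err != nil {
			fmt.Println("docker inspect failed:", err, string(out))
			return
		}
		// Prints 33760 for the container state captured above.
		fmt.Println("8443/tcp published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}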
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-361043 -n pause-361043
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-361043 -n pause-361043: exit status 2 (352.866546ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-361043 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-361043 logs -n 25: (1.38617645s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-925071 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-925071       │ jenkins │ v1.37.0 │ 14 Oct 25 20:56 UTC │ 14 Oct 25 20:56 UTC │
	│ start   │ -p missing-upgrade-558395 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-558395    │ jenkins │ v1.32.0 │ 14 Oct 25 20:56 UTC │ 14 Oct 25 20:57 UTC │
	│ start   │ -p NoKubernetes-925071 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-925071       │ jenkins │ v1.37.0 │ 14 Oct 25 20:57 UTC │ 14 Oct 25 20:58 UTC │
	│ start   │ -p missing-upgrade-558395 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-558395    │ jenkins │ v1.37.0 │ 14 Oct 25 20:57 UTC │ 14 Oct 25 20:58 UTC │
	│ delete  │ -p missing-upgrade-558395                                                                                                                │ missing-upgrade-558395    │ jenkins │ v1.37.0 │ 14 Oct 25 20:58 UTC │ 14 Oct 25 20:58 UTC │
	│ start   │ -p kubernetes-upgrade-791601 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-791601 │ jenkins │ v1.37.0 │ 14 Oct 25 20:58 UTC │ 14 Oct 25 20:58 UTC │
	│ delete  │ -p NoKubernetes-925071                                                                                                                   │ NoKubernetes-925071       │ jenkins │ v1.37.0 │ 14 Oct 25 20:58 UTC │ 14 Oct 25 20:58 UTC │
	│ start   │ -p NoKubernetes-925071 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-925071       │ jenkins │ v1.37.0 │ 14 Oct 25 20:58 UTC │ 14 Oct 25 20:59 UTC │
	│ stop    │ -p kubernetes-upgrade-791601                                                                                                             │ kubernetes-upgrade-791601 │ jenkins │ v1.37.0 │ 14 Oct 25 20:58 UTC │ 14 Oct 25 20:58 UTC │
	│ start   │ -p kubernetes-upgrade-791601 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-791601 │ jenkins │ v1.37.0 │ 14 Oct 25 20:58 UTC │                     │
	│ ssh     │ -p NoKubernetes-925071 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-925071       │ jenkins │ v1.37.0 │ 14 Oct 25 20:59 UTC │                     │
	│ stop    │ -p NoKubernetes-925071                                                                                                                   │ NoKubernetes-925071       │ jenkins │ v1.37.0 │ 14 Oct 25 20:59 UTC │ 14 Oct 25 20:59 UTC │
	│ start   │ -p NoKubernetes-925071 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-925071       │ jenkins │ v1.37.0 │ 14 Oct 25 20:59 UTC │ 14 Oct 25 20:59 UTC │
	│ ssh     │ -p NoKubernetes-925071 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-925071       │ jenkins │ v1.37.0 │ 14 Oct 25 20:59 UTC │                     │
	│ delete  │ -p NoKubernetes-925071                                                                                                                   │ NoKubernetes-925071       │ jenkins │ v1.37.0 │ 14 Oct 25 20:59 UTC │ 14 Oct 25 20:59 UTC │
	│ start   │ -p stopped-upgrade-755343 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-755343    │ jenkins │ v1.32.0 │ 14 Oct 25 20:59 UTC │ 14 Oct 25 20:59 UTC │
	│ stop    │ stopped-upgrade-755343 stop                                                                                                              │ stopped-upgrade-755343    │ jenkins │ v1.32.0 │ 14 Oct 25 20:59 UTC │ 14 Oct 25 20:59 UTC │
	│ start   │ -p stopped-upgrade-755343 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-755343    │ jenkins │ v1.37.0 │ 14 Oct 25 20:59 UTC │ 14 Oct 25 21:00 UTC │
	│ delete  │ -p stopped-upgrade-755343                                                                                                                │ stopped-upgrade-755343    │ jenkins │ v1.37.0 │ 14 Oct 25 21:00 UTC │ 14 Oct 25 21:00 UTC │
	│ start   │ -p running-upgrade-274014 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-274014    │ jenkins │ v1.32.0 │ 14 Oct 25 21:00 UTC │ 14 Oct 25 21:00 UTC │
	│ start   │ -p running-upgrade-274014 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-274014    │ jenkins │ v1.37.0 │ 14 Oct 25 21:00 UTC │ 14 Oct 25 21:01 UTC │
	│ delete  │ -p running-upgrade-274014                                                                                                                │ running-upgrade-274014    │ jenkins │ v1.37.0 │ 14 Oct 25 21:01 UTC │ 14 Oct 25 21:01 UTC │
	│ start   │ -p pause-361043 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-361043              │ jenkins │ v1.37.0 │ 14 Oct 25 21:01 UTC │ 14 Oct 25 21:02 UTC │
	│ start   │ -p pause-361043 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-361043              │ jenkins │ v1.37.0 │ 14 Oct 25 21:02 UTC │ 14 Oct 25 21:03 UTC │
	│ pause   │ -p pause-361043 --alsologtostderr -v=5                                                                                                   │ pause-361043              │ jenkins │ v1.37.0 │ 14 Oct 25 21:03 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:02:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:02:35.328609  706484 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:02:35.328788  706484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:02:35.328798  706484 out.go:374] Setting ErrFile to fd 2...
	I1014 21:02:35.328804  706484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:02:35.329080  706484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:02:35.329471  706484 out.go:368] Setting JSON to false
	I1014 21:02:35.330605  706484 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13506,"bootTime":1760462250,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:02:35.330690  706484 start.go:141] virtualization:  
	I1014 21:02:35.333797  706484 out.go:179] * [pause-361043] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:02:35.337727  706484 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:02:35.337830  706484 notify.go:220] Checking for updates...
	I1014 21:02:35.344273  706484 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:02:35.347180  706484 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:02:35.350168  706484 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:02:35.353145  706484 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:02:35.356062  706484 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:02:35.359765  706484 config.go:182] Loaded profile config "pause-361043": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:02:35.360429  706484 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:02:35.382488  706484 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:02:35.382678  706484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:02:35.448777  706484 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-14 21:02:35.439240457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:02:35.448904  706484 docker.go:318] overlay module found
	I1014 21:02:35.452116  706484 out.go:179] * Using the docker driver based on existing profile
	I1014 21:02:35.454906  706484 start.go:305] selected driver: docker
	I1014 21:02:35.454926  706484 start.go:925] validating driver "docker" against &{Name:pause-361043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-361043 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:02:35.455055  706484 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:02:35.455192  706484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:02:35.515367  706484 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-14 21:02:35.505037388 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:02:35.515816  706484 cni.go:84] Creating CNI manager for ""
	I1014 21:02:35.515883  706484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:02:35.515929  706484 start.go:349] cluster config:
	{Name:pause-361043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-361043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:02:35.520941  706484 out.go:179] * Starting "pause-361043" primary control-plane node in "pause-361043" cluster
	I1014 21:02:35.523837  706484 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:02:35.526716  706484 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:02:35.529649  706484 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:02:35.529712  706484 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 21:02:35.529727  706484 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:02:35.529741  706484 cache.go:58] Caching tarball of preloaded images
	I1014 21:02:35.529829  706484 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:02:35.529839  706484 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 21:02:35.529971  706484 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/config.json ...
	I1014 21:02:35.557784  706484 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:02:35.557808  706484 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:02:35.557838  706484 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:02:35.557861  706484 start.go:360] acquireMachinesLock for pause-361043: {Name:mk981938910bdc0a2c695c21ebe32bffff4c7396 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:02:35.557933  706484 start.go:364] duration metric: took 48.591µs to acquireMachinesLock for "pause-361043"
	I1014 21:02:35.557956  706484 start.go:96] Skipping create...Using existing machine configuration
	I1014 21:02:35.557968  706484 fix.go:54] fixHost starting: 
	I1014 21:02:35.558252  706484 cli_runner.go:164] Run: docker container inspect pause-361043 --format={{.State.Status}}
	I1014 21:02:35.575142  706484 fix.go:112] recreateIfNeeded on pause-361043: state=Running err=<nil>
	W1014 21:02:35.575178  706484 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 21:02:34.795160  688786 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:02:36.748706  688786 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:51520->192.168.85.2:8443: read: connection reset by peer
	I1014 21:02:36.748762  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:02:36.748823  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:02:36.789476  688786 cri.go:89] found id: "a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:36.789495  688786 cri.go:89] found id: ""
	I1014 21:02:36.789503  688786 logs.go:282] 1 containers: [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe]
	I1014 21:02:36.789563  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:36.795222  688786 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:02:36.795295  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:02:36.828214  688786 cri.go:89] found id: ""
	I1014 21:02:36.828235  688786 logs.go:282] 0 containers: []
	W1014 21:02:36.828243  688786 logs.go:284] No container was found matching "etcd"
	I1014 21:02:36.828250  688786 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:02:36.828338  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:02:36.865761  688786 cri.go:89] found id: ""
	I1014 21:02:36.865798  688786 logs.go:282] 0 containers: []
	W1014 21:02:36.865808  688786 logs.go:284] No container was found matching "coredns"
	I1014 21:02:36.865818  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:02:36.865883  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:02:36.907394  688786 cri.go:89] found id: "cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:36.907417  688786 cri.go:89] found id: ""
	I1014 21:02:36.907426  688786 logs.go:282] 1 containers: [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1]
	I1014 21:02:36.907482  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:36.912891  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:02:36.912960  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:02:36.956034  688786 cri.go:89] found id: ""
	I1014 21:02:36.956061  688786 logs.go:282] 0 containers: []
	W1014 21:02:36.956071  688786 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:02:36.956077  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:02:36.956143  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:02:36.988130  688786 cri.go:89] found id: "e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:36.988155  688786 cri.go:89] found id: "569a58ec6039ad2001138c23001f6758c0c78a15710fb1bbd05c0cd6bf081e91"
	I1014 21:02:36.988160  688786 cri.go:89] found id: ""
	I1014 21:02:36.988168  688786 logs.go:282] 2 containers: [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de 569a58ec6039ad2001138c23001f6758c0c78a15710fb1bbd05c0cd6bf081e91]
	I1014 21:02:36.988261  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:36.992198  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:36.995946  688786 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:02:36.996017  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:02:37.032956  688786 cri.go:89] found id: ""
	I1014 21:02:37.032978  688786 logs.go:282] 0 containers: []
	W1014 21:02:37.032986  688786 logs.go:284] No container was found matching "kindnet"
	I1014 21:02:37.032992  688786 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 21:02:37.033051  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 21:02:37.081811  688786 cri.go:89] found id: ""
	I1014 21:02:37.081836  688786 logs.go:282] 0 containers: []
	W1014 21:02:37.081846  688786 logs.go:284] No container was found matching "storage-provisioner"
	I1014 21:02:37.081860  688786 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:02:37.081872  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:02:37.178939  688786 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:02:37.178957  688786 logs.go:123] Gathering logs for kube-apiserver [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe] ...
	I1014 21:02:37.178969  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:37.217522  688786 logs.go:123] Gathering logs for kube-scheduler [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1] ...
	I1014 21:02:37.217554  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:37.275912  688786 logs.go:123] Gathering logs for kube-controller-manager [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de] ...
	I1014 21:02:37.275950  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:37.304387  688786 logs.go:123] Gathering logs for kube-controller-manager [569a58ec6039ad2001138c23001f6758c0c78a15710fb1bbd05c0cd6bf081e91] ...
	I1014 21:02:37.304415  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 569a58ec6039ad2001138c23001f6758c0c78a15710fb1bbd05c0cd6bf081e91"
	I1014 21:02:37.331033  688786 logs.go:123] Gathering logs for container status ...
	I1014 21:02:37.331065  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 21:02:37.364484  688786 logs.go:123] Gathering logs for kubelet ...
	I1014 21:02:37.364514  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:02:37.476455  688786 logs.go:123] Gathering logs for dmesg ...
	I1014 21:02:37.476489  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 21:02:37.494542  688786 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:02:37.494569  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:02:35.578559  706484 out.go:252] * Updating the running docker "pause-361043" container ...
	I1014 21:02:35.578590  706484 machine.go:93] provisionDockerMachine start ...
	I1014 21:02:35.578682  706484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:02:35.595764  706484 main.go:141] libmachine: Using SSH client type: native
	I1014 21:02:35.596122  706484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33757 <nil> <nil>}
	I1014 21:02:35.596136  706484 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:02:35.743995  706484 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-361043
	
	I1014 21:02:35.744016  706484 ubuntu.go:182] provisioning hostname "pause-361043"
	I1014 21:02:35.744078  706484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:02:35.761154  706484 main.go:141] libmachine: Using SSH client type: native
	I1014 21:02:35.761458  706484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33757 <nil> <nil>}
	I1014 21:02:35.761484  706484 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-361043 && echo "pause-361043" | sudo tee /etc/hostname
	I1014 21:02:35.915096  706484 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-361043
	
	I1014 21:02:35.915189  706484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:02:35.934670  706484 main.go:141] libmachine: Using SSH client type: native
	I1014 21:02:35.934991  706484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33757 <nil> <nil>}
	I1014 21:02:35.935021  706484 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-361043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-361043/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-361043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:02:36.084721  706484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
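Each provisioning step above first resolves the container's published SSH port with a `docker container inspect -f` template and then dials 127.0.0.1 on that port. A small sketch of the port lookup follows, assuming the Docker CLI is on PATH; the helper name is hypothetical.

```go
// sshport.go - a minimal sketch (assumed helper, not minikube code) that asks
// the Docker CLI for the host port mapped to the container's 22/tcp, using the
// same Go template shown in the cli_runner lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostSSHPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("pause-361043")
	if err != nil {
		fmt.Println(err)
		return
	}
	// The provisioner then dials 127.0.0.1:<port> with the machine's SSH key.
	fmt.Println("ssh -p", port, "docker@127.0.0.1")
}
```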
	I1014 21:02:36.084749  706484 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:02:36.084778  706484 ubuntu.go:190] setting up certificates
	I1014 21:02:36.084788  706484 provision.go:84] configureAuth start
	I1014 21:02:36.084851  706484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-361043
	I1014 21:02:36.103487  706484 provision.go:143] copyHostCerts
	I1014 21:02:36.103561  706484 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:02:36.103576  706484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:02:36.103655  706484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:02:36.103760  706484 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:02:36.103772  706484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:02:36.103800  706484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:02:36.103867  706484 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:02:36.103877  706484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:02:36.103904  706484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:02:36.103958  706484 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.pause-361043 san=[127.0.0.1 192.168.76.2 localhost minikube pause-361043]
	I1014 21:02:36.638106  706484 provision.go:177] copyRemoteCerts
	I1014 21:02:36.638175  706484 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:02:36.638218  706484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:02:36.655818  706484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33757 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/pause-361043/id_rsa Username:docker}
	I1014 21:02:36.761114  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:02:36.782443  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 21:02:36.804103  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 21:02:36.836122  706484 provision.go:87] duration metric: took 751.307293ms to configureAuth
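configureAuth above regenerates the machine server certificate with the SAN list shown at provision.go:117 (127.0.0.1, 192.168.76.2, localhost, minikube, pause-361043) and copies it to /etc/docker. Below is a minimal crypto/x509 sketch of a certificate carrying those SANs; it self-signs for brevity, whereas the real flow signs with the ca.pem/ca-key.pem referenced above.

```go
// servercert.go - a minimal sketch generating a server certificate whose SANs
// match the san=[...] list in the provisioning log. Self-signed here only to
// keep the example short; the actual provisioner signs with the cluster CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-361043"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: IP SANs and DNS SANs are kept in separate fields.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:    []string{"localhost", "minikube", "pause-361043"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```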
	I1014 21:02:36.836188  706484 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:02:36.836553  706484 config.go:182] Loaded profile config "pause-361043": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:02:36.836702  706484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:02:36.861250  706484 main.go:141] libmachine: Using SSH client type: native
	I1014 21:02:36.861562  706484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33757 <nil> <nil>}
	I1014 21:02:36.861577  706484 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:02:40.060157  688786 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:02:40.060952  688786 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
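The "Checking apiserver healthz" / "stopped" pair above is a plain HTTPS probe of /healthz that treats a refused connection as a not-yet-ready apiserver and retries. A self-contained sketch of that probe; TLS verification is skipped here only to keep it short, while the real check trusts the cluster CA.

```go
// healthz.go - a minimal sketch of the apiserver health probe seen above: an
// HTTPS GET against /healthz that logs "stopped" on connection errors and
// retries a few times before giving up.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.85.2:8443/healthz"
	for i := 0; i < 5; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err) // e.g. connection refused
			time.Sleep(3 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %d %s\n", url, resp.StatusCode, body)
		return
	}
}
```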
	I1014 21:02:40.061032  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:02:40.061131  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:02:40.096504  688786 cri.go:89] found id: "a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:40.096526  688786 cri.go:89] found id: ""
	I1014 21:02:40.096535  688786 logs.go:282] 1 containers: [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe]
	I1014 21:02:40.096603  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:40.101441  688786 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:02:40.101524  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:02:40.132721  688786 cri.go:89] found id: ""
	I1014 21:02:40.132749  688786 logs.go:282] 0 containers: []
	W1014 21:02:40.132759  688786 logs.go:284] No container was found matching "etcd"
	I1014 21:02:40.132766  688786 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:02:40.132834  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:02:40.164958  688786 cri.go:89] found id: ""
	I1014 21:02:40.164986  688786 logs.go:282] 0 containers: []
	W1014 21:02:40.164995  688786 logs.go:284] No container was found matching "coredns"
	I1014 21:02:40.165002  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:02:40.165067  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:02:40.194049  688786 cri.go:89] found id: "cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:40.194071  688786 cri.go:89] found id: ""
	I1014 21:02:40.194080  688786 logs.go:282] 1 containers: [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1]
	I1014 21:02:40.194138  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:40.198247  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:02:40.198320  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:02:40.225200  688786 cri.go:89] found id: ""
	I1014 21:02:40.225226  688786 logs.go:282] 0 containers: []
	W1014 21:02:40.225234  688786 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:02:40.225240  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:02:40.225297  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:02:40.252152  688786 cri.go:89] found id: "e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:40.252176  688786 cri.go:89] found id: "569a58ec6039ad2001138c23001f6758c0c78a15710fb1bbd05c0cd6bf081e91"
	I1014 21:02:40.252181  688786 cri.go:89] found id: ""
	I1014 21:02:40.252193  688786 logs.go:282] 2 containers: [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de 569a58ec6039ad2001138c23001f6758c0c78a15710fb1bbd05c0cd6bf081e91]
	I1014 21:02:40.252247  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:40.256028  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:40.259583  688786 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:02:40.259656  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:02:40.284923  688786 cri.go:89] found id: ""
	I1014 21:02:40.284948  688786 logs.go:282] 0 containers: []
	W1014 21:02:40.284957  688786 logs.go:284] No container was found matching "kindnet"
	I1014 21:02:40.284963  688786 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 21:02:40.285018  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 21:02:40.315678  688786 cri.go:89] found id: ""
	I1014 21:02:40.315703  688786 logs.go:282] 0 containers: []
	W1014 21:02:40.315712  688786 logs.go:284] No container was found matching "storage-provisioner"
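Each "listing CRI containers" / "found id" exchange above runs `crictl ps -a --quiet --name=<component>` and treats every non-empty output line as a container ID; an empty result produces the "No container was found matching" warnings. A minimal sketch of that sweep, with an illustrative helper name:

```go
// findcontainers.go - a minimal sketch mirroring the container-listing sweep
// above: ask crictl for IDs of containers whose name matches each control-plane
// component and report how many were found.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, name := range components {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, name)
	}
}
```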
	I1014 21:02:40.315726  688786 logs.go:123] Gathering logs for container status ...
	I1014 21:02:40.315739  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 21:02:40.348297  688786 logs.go:123] Gathering logs for kubelet ...
	I1014 21:02:40.348347  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:02:40.460256  688786 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:02:40.460289  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:02:40.524641  688786 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:02:40.524664  688786 logs.go:123] Gathering logs for kube-scheduler [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1] ...
	I1014 21:02:40.524677  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:40.592499  688786 logs.go:123] Gathering logs for dmesg ...
	I1014 21:02:40.592535  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 21:02:40.611040  688786 logs.go:123] Gathering logs for kube-apiserver [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe] ...
	I1014 21:02:40.611087  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:40.645800  688786 logs.go:123] Gathering logs for kube-controller-manager [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de] ...
	I1014 21:02:40.645833  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:40.672096  688786 logs.go:123] Gathering logs for kube-controller-manager [569a58ec6039ad2001138c23001f6758c0c78a15710fb1bbd05c0cd6bf081e91] ...
	I1014 21:02:40.672125  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 569a58ec6039ad2001138c23001f6758c0c78a15710fb1bbd05c0cd6bf081e91"
	I1014 21:02:40.699928  688786 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:02:40.699955  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:02:43.259936  688786 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:02:43.260348  688786 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1014 21:02:43.260393  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:02:43.260451  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:02:43.298692  688786 cri.go:89] found id: "a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:43.298710  688786 cri.go:89] found id: ""
	I1014 21:02:43.298717  688786 logs.go:282] 1 containers: [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe]
	I1014 21:02:43.298778  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:43.305594  688786 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:02:43.305659  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:02:43.339238  688786 cri.go:89] found id: ""
	I1014 21:02:43.339259  688786 logs.go:282] 0 containers: []
	W1014 21:02:43.339267  688786 logs.go:284] No container was found matching "etcd"
	I1014 21:02:43.339273  688786 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:02:43.339331  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:02:43.374353  688786 cri.go:89] found id: ""
	I1014 21:02:43.374373  688786 logs.go:282] 0 containers: []
	W1014 21:02:43.374381  688786 logs.go:284] No container was found matching "coredns"
	I1014 21:02:43.374388  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:02:43.374442  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:02:42.244942  706484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:02:42.244967  706484 machine.go:96] duration metric: took 6.666367806s to provisionDockerMachine
	I1014 21:02:42.244978  706484 start.go:293] postStartSetup for "pause-361043" (driver="docker")
	I1014 21:02:42.244988  706484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:02:42.245053  706484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:02:42.245103  706484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:02:42.264616  706484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33757 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/pause-361043/id_rsa Username:docker}
	I1014 21:02:42.368833  706484 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:02:42.372441  706484 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:02:42.372474  706484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:02:42.372496  706484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:02:42.372552  706484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:02:42.372633  706484 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:02:42.372741  706484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:02:42.380335  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:02:42.398088  706484 start.go:296] duration metric: took 153.094105ms for postStartSetup
	I1014 21:02:42.398218  706484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:02:42.398265  706484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:02:42.414920  706484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33757 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/pause-361043/id_rsa Username:docker}
	I1014 21:02:42.517879  706484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:02:42.523459  706484 fix.go:56] duration metric: took 6.965487793s for fixHost
	I1014 21:02:42.523485  706484 start.go:83] releasing machines lock for "pause-361043", held for 6.965540339s
	I1014 21:02:42.523571  706484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-361043
	I1014 21:02:42.546401  706484 ssh_runner.go:195] Run: cat /version.json
	I1014 21:02:42.546451  706484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:02:42.546747  706484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:02:42.546801  706484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-361043
	I1014 21:02:42.570468  706484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33757 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/pause-361043/id_rsa Username:docker}
	I1014 21:02:42.580899  706484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33757 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/pause-361043/id_rsa Username:docker}
	I1014 21:02:42.672375  706484 ssh_runner.go:195] Run: systemctl --version
	I1014 21:02:42.772519  706484 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:02:42.814978  706484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:02:42.819450  706484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:02:42.819566  706484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:02:42.827766  706484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 21:02:42.827787  706484 start.go:495] detecting cgroup driver to use...
	I1014 21:02:42.827818  706484 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:02:42.827864  706484 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:02:42.843664  706484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:02:42.857570  706484 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:02:42.857653  706484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:02:42.873200  706484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:02:42.886511  706484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:02:43.013628  706484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:02:43.153959  706484 docker.go:234] disabling docker service ...
	I1014 21:02:43.154063  706484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:02:43.169482  706484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:02:43.182580  706484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:02:43.354240  706484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:02:43.507120  706484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:02:43.520712  706484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:02:43.538125  706484 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:02:43.538284  706484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:02:43.549287  706484 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:02:43.549440  706484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:02:43.561831  706484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:02:43.571258  706484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:02:43.580035  706484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:02:43.588673  706484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:02:43.598220  706484 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:02:43.607665  706484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:02:43.626947  706484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:02:43.642739  706484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:02:43.654185  706484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:02:43.821763  706484 ssh_runner.go:195] Run: sudo systemctl restart crio
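The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default_sysctls) before reloading systemd and restarting CRI-O. Below is a stdlib-only sketch of the two simplest rewrites, assuming the drop-in file exists at that path.

```go
// crioconf.go - a minimal sketch of the drop-in rewrite performed by the sed
// commands above; the caller then runs `systemctl daemon-reload` and
// `systemctl restart crio`, as the log shows. Error handling is minimal.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
}
```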
	I1014 21:02:44.029280  706484 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:02:44.029353  706484 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:02:44.035512  706484 start.go:563] Will wait 60s for crictl version
	I1014 21:02:44.035582  706484 ssh_runner.go:195] Run: which crictl
	I1014 21:02:44.041230  706484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:02:44.069027  706484 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:02:44.069107  706484 ssh_runner.go:195] Run: crio --version
	I1014 21:02:44.106385  706484 ssh_runner.go:195] Run: crio --version
	I1014 21:02:44.146674  706484 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 21:02:44.149616  706484 cli_runner.go:164] Run: docker network inspect pause-361043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:02:44.171262  706484 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1014 21:02:44.175218  706484 kubeadm.go:883] updating cluster {Name:pause-361043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-361043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:02:44.175362  706484 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:02:44.175425  706484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:02:44.210763  706484 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:02:44.210788  706484 crio.go:433] Images already preloaded, skipping extraction
	I1014 21:02:44.210844  706484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:02:44.240574  706484 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:02:44.240600  706484 cache_images.go:85] Images are preloaded, skipping loading
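The preload check above runs `sudo crictl images --output json` and concludes that all images are already present, so the preload tarball is not extracted. A small sketch of reading that JSON follows; the field names ("images", "repoTags") reflect current crictl output and are an assumption of this sketch, not something taken from the log.

```go
// preloadcheck.go - a minimal sketch of the image inventory step above: list
// the repo tags crictl reports so a caller can compare them against the
// expected preloaded image set.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.ID, img.RepoTags)
	}
}
```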
	I1014 21:02:44.240610  706484 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1014 21:02:44.240711  706484 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-361043 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-361043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 21:02:44.240795  706484 ssh_runner.go:195] Run: crio config
	I1014 21:02:44.295973  706484 cni.go:84] Creating CNI manager for ""
	I1014 21:02:44.296001  706484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:02:44.296017  706484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 21:02:44.296074  706484 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-361043 NodeName:pause-361043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:02:44.296214  706484 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-361043"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 21:02:44.296292  706484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:02:44.305110  706484 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:02:44.305178  706484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:02:44.312852  706484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1014 21:02:44.340619  706484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:02:44.360728  706484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
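The kubeadm config written to /var/tmp/minikube/kubeadm.yaml.new is the four-document YAML shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a quick sanity check, a stdlib-only sketch that splits the file on `---` and reports each document's kind:

```go
// kubeadmkinds.go - a minimal sketch (not part of minikube) that enumerates the
// kinds in the multi-document kubeadm config the flow above just copied over.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	for i, doc := range strings.Split(string(data), "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			// Expected: InitConfiguration, ClusterConfiguration,
			// KubeletConfiguration, KubeProxyConfiguration.
			fmt.Printf("document %d: %s\n", i, m[1])
		}
	}
}
```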
	I1014 21:02:44.384620  706484 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:02:44.389734  706484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:02:44.615182  706484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:02:44.652609  706484 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043 for IP: 192.168.76.2
	I1014 21:02:44.652632  706484 certs.go:195] generating shared ca certs ...
	I1014 21:02:44.652648  706484 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:02:44.652784  706484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:02:44.652837  706484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:02:44.652850  706484 certs.go:257] generating profile certs ...
	I1014 21:02:44.652937  706484 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/client.key
	I1014 21:02:44.653002  706484 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/apiserver.key.04b348e2
	I1014 21:02:44.653048  706484 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/proxy-client.key
	I1014 21:02:44.653162  706484 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:02:44.653196  706484 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:02:44.653208  706484 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:02:44.653233  706484 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:02:44.653259  706484 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:02:44.653285  706484 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:02:44.653329  706484 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:02:44.653915  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:02:44.697188  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:02:44.733299  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:02:44.764416  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:02:44.792879  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1014 21:02:44.824026  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 21:02:44.857514  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:02:44.890074  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 21:02:44.919963  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:02:44.956500  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:02:44.976935  706484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:02:45.003667  706484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:02:45.037000  706484 ssh_runner.go:195] Run: openssl version
	I1014 21:02:45.046343  706484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:02:45.058374  706484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:02:45.064175  706484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:02:45.064360  706484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:02:45.120177  706484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 21:02:45.133541  706484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:02:45.147775  706484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:02:45.157018  706484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:02:45.157165  706484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:02:45.216626  706484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:02:45.230494  706484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:02:45.243084  706484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:02:45.249373  706484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:02:45.249549  706484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:02:45.300850  706484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
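The openssl/ln pairs above install each CA certificate under its OpenSSL subject-hash name (for example /etc/ssl/certs/b5213941.0 for minikubeCA.pem). A minimal sketch of computing that hash and creating the link; it assumes openssl is on PATH and the process can write to /etc/ssl/certs.

```go
// certlink.go - a minimal sketch of the hashing + symlink steps above: ask
// openssl for the certificate's subject hash and link the PEM into
// /etc/ssl/certs under "<hash>.0" so OpenSSL-based clients can find it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ignore error; mirrors `ln -fs` overwriting an old link
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", pemPath)
}
```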
	I1014 21:02:45.312078  706484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:02:45.319420  706484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 21:02:45.395790  706484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 21:02:45.450593  706484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 21:02:45.502110  706484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 21:02:45.564364  706484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 21:02:45.608763  706484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
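Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate will still be valid 24 hours from now. The same check expressed in Go, against one of the paths probed above:

```go
// checkend.go - a minimal sketch of the -checkend 86400 validity check: parse a
// PEM certificate and exit non-zero if it expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past", deadline.Format(time.RFC3339))
}
```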
	I1014 21:02:45.655050  706484 kubeadm.go:400] StartCluster: {Name:pause-361043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-361043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:02:45.655249  706484 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:02:45.655349  706484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:02:45.687638  706484 cri.go:89] found id: "a6a670e4b96d401ca63818f2706aa6e56c947a842bddbb42558d305cb8007f32"
	I1014 21:02:45.687712  706484 cri.go:89] found id: "fbcaf7a84e4396d138d2b4f56efa1373b3bcedfc203b447d251e52b1052cba5c"
	I1014 21:02:45.687734  706484 cri.go:89] found id: "23cf2e7d8049384cf8d3943f02d514c3e391d5ad704b75431865b05d3d029f22"
	I1014 21:02:45.687753  706484 cri.go:89] found id: "660ef0bec4428d1153fa555e4a0f7daa6380e392888604d04e0e4322d7771003"
	I1014 21:02:45.687787  706484 cri.go:89] found id: "ec1e6454c986ce31b077f25fc95ab32526d877c36bcb22d9c7e025f1cce4a245"
	I1014 21:02:45.687819  706484 cri.go:89] found id: "0656608153f0e568502f545a2d6fcbc28356167d353ce4ef22b6e0c9db00be42"
	I1014 21:02:45.687837  706484 cri.go:89] found id: "0b6f7661d8beda525ec254094f39f88e7c53c0c15a7ddaad9ae3d5d15de7ab78"
	I1014 21:02:45.687867  706484 cri.go:89] found id: "29d1f875e07fb368ad19aeba36a9b146d4a9b45accd812d3388a16b14c5199c5"
	I1014 21:02:45.687888  706484 cri.go:89] found id: "fe32c1a3bade0fcd3521bb4bfa21cb5373816ddea996c29b888604e2792f7df8"
	I1014 21:02:45.687912  706484 cri.go:89] found id: "aa095dcde47c4e0f4ed9486946661610675ccee537498ca0707b769339e8e582"
	I1014 21:02:45.687929  706484 cri.go:89] found id: "5f45c9528a708bad4f13dd1f5f8d17082f4445a4bfd9e9b4dd4679184e33b4c7"
	I1014 21:02:45.687960  706484 cri.go:89] found id: "055ff8e53d7f5e81c45b6752bfdfaa25d49b2274d3c55e4ff21315a5cc1b8042"
	I1014 21:02:45.687978  706484 cri.go:89] found id: "a49c5d87c8b814e895398783eb4a4f8388fdc2356900055aae236ddce0b78dfa"
	I1014 21:02:45.687999  706484 cri.go:89] found id: "8d254f086a1c65340f4c5ac0cbb7b76bbedcd67dd47f333308d7d82abfd769b6"
	I1014 21:02:45.688031  706484 cri.go:89] found id: ""
	I1014 21:02:45.688118  706484 ssh_runner.go:195] Run: sudo runc list -f json
	W1014 21:02:45.701440  706484 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:02:45Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:02:45.701599  706484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:02:45.710873  706484 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 21:02:45.710948  706484 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 21:02:45.711031  706484 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 21:02:45.719575  706484 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 21:02:45.720387  706484 kubeconfig.go:125] found "pause-361043" server: "https://192.168.76.2:8443"
	I1014 21:02:45.721370  706484 kapi.go:59] client config for pause-361043: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/client.key", CAFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 21:02:45.722279  706484 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 21:02:45.722332  706484 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 21:02:45.722354  706484 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 21:02:45.722378  706484 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 21:02:45.722413  706484 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
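kapi.go:59 above builds a rest.Config from the profile's client certificate, key and CA, and the later "Verifying Kubernetes components" step needs that client to reach the apiserver. Below is a minimal client-go sketch of the same kind of client, built from the kubeconfig path shown in the log rather than through minikube's internal helper.

```go
// kubeclient.go - a minimal client-go sketch (assumed example, not minikube's
// kapi helper): build a client from the test run's kubeconfig and list nodes.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := "/home/jenkins/minikube-integration/21409-545047/kubeconfig"
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err) // e.g. "connection refused" while the apiserver is restarting
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name)
	}
}
```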
	I1014 21:02:45.724271  706484 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 21:02:45.734384  706484 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1014 21:02:45.734466  706484 kubeadm.go:601] duration metric: took 23.49702ms to restartPrimaryControlPlane
	I1014 21:02:45.734491  706484 kubeadm.go:402] duration metric: took 79.450371ms to StartCluster
	I1014 21:02:45.734535  706484 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:02:45.734632  706484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:02:45.735617  706484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:02:45.735942  706484 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:02:45.736517  706484 config.go:182] Loaded profile config "pause-361043": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:02:45.736675  706484 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 21:02:45.739201  706484 out.go:179] * Verifying Kubernetes components...
	I1014 21:02:45.739289  706484 out.go:179] * Enabled addons: 
	I1014 21:02:43.431633  688786 cri.go:89] found id: "cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:43.431653  688786 cri.go:89] found id: ""
	I1014 21:02:43.431661  688786 logs.go:282] 1 containers: [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1]
	I1014 21:02:43.431715  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:43.435792  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:02:43.435859  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:02:43.462770  688786 cri.go:89] found id: ""
	I1014 21:02:43.462792  688786 logs.go:282] 0 containers: []
	W1014 21:02:43.462800  688786 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:02:43.462806  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:02:43.462867  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:02:43.493417  688786 cri.go:89] found id: "e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:43.493437  688786 cri.go:89] found id: ""
	I1014 21:02:43.493444  688786 logs.go:282] 1 containers: [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de]
	I1014 21:02:43.493499  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:43.498596  688786 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:02:43.498717  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:02:43.553779  688786 cri.go:89] found id: ""
	I1014 21:02:43.553855  688786 logs.go:282] 0 containers: []
	W1014 21:02:43.553879  688786 logs.go:284] No container was found matching "kindnet"
	I1014 21:02:43.553902  688786 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 21:02:43.554021  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 21:02:43.598002  688786 cri.go:89] found id: ""
	I1014 21:02:43.598028  688786 logs.go:282] 0 containers: []
	W1014 21:02:43.598037  688786 logs.go:284] No container was found matching "storage-provisioner"
	I1014 21:02:43.598046  688786 logs.go:123] Gathering logs for kubelet ...
	I1014 21:02:43.598056  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:02:43.731962  688786 logs.go:123] Gathering logs for dmesg ...
	I1014 21:02:43.732000  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 21:02:43.754726  688786 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:02:43.754757  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:02:43.849414  688786 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:02:43.849437  688786 logs.go:123] Gathering logs for kube-apiserver [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe] ...
	I1014 21:02:43.849450  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:43.897221  688786 logs.go:123] Gathering logs for kube-scheduler [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1] ...
	I1014 21:02:43.897297  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:43.978844  688786 logs.go:123] Gathering logs for kube-controller-manager [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de] ...
	I1014 21:02:43.978919  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:44.011861  688786 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:02:44.011908  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:02:44.084191  688786 logs.go:123] Gathering logs for container status ...
	I1014 21:02:44.084232  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 21:02:46.628366  688786 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:02:46.628756  688786 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1014 21:02:46.628793  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:02:46.628845  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:02:46.673754  688786 cri.go:89] found id: "a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:46.673773  688786 cri.go:89] found id: ""
	I1014 21:02:46.673781  688786 logs.go:282] 1 containers: [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe]
	I1014 21:02:46.673836  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:46.680141  688786 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:02:46.680213  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:02:46.725836  688786 cri.go:89] found id: ""
	I1014 21:02:46.725858  688786 logs.go:282] 0 containers: []
	W1014 21:02:46.725866  688786 logs.go:284] No container was found matching "etcd"
	I1014 21:02:46.725872  688786 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:02:46.725935  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:02:46.789793  688786 cri.go:89] found id: ""
	I1014 21:02:46.789815  688786 logs.go:282] 0 containers: []
	W1014 21:02:46.789823  688786 logs.go:284] No container was found matching "coredns"
	I1014 21:02:46.789829  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:02:46.789886  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:02:46.848694  688786 cri.go:89] found id: "cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:46.848713  688786 cri.go:89] found id: ""
	I1014 21:02:46.848721  688786 logs.go:282] 1 containers: [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1]
	I1014 21:02:46.848774  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:46.853781  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:02:46.853852  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:02:46.899768  688786 cri.go:89] found id: ""
	I1014 21:02:46.899788  688786 logs.go:282] 0 containers: []
	W1014 21:02:46.899796  688786 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:02:46.899802  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:02:46.899858  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:02:46.927426  688786 cri.go:89] found id: "e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:46.927445  688786 cri.go:89] found id: ""
	I1014 21:02:46.927453  688786 logs.go:282] 1 containers: [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de]
	I1014 21:02:46.927511  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:46.931592  688786 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:02:46.931709  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:02:46.991428  688786 cri.go:89] found id: ""
	I1014 21:02:46.991496  688786 logs.go:282] 0 containers: []
	W1014 21:02:46.991518  688786 logs.go:284] No container was found matching "kindnet"
	I1014 21:02:46.991540  688786 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 21:02:46.991630  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 21:02:47.024967  688786 cri.go:89] found id: ""
	I1014 21:02:47.025046  688786 logs.go:282] 0 containers: []
	W1014 21:02:47.025069  688786 logs.go:284] No container was found matching "storage-provisioner"
	I1014 21:02:47.025110  688786 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:02:47.025138  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:02:47.106553  688786 logs.go:123] Gathering logs for container status ...
	I1014 21:02:47.106588  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 21:02:47.176875  688786 logs.go:123] Gathering logs for kubelet ...
	I1014 21:02:47.176899  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:02:47.332904  688786 logs.go:123] Gathering logs for dmesg ...
	I1014 21:02:47.332987  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 21:02:47.356207  688786 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:02:47.356282  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:02:47.491980  688786 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:02:47.492044  688786 logs.go:123] Gathering logs for kube-apiserver [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe] ...
	I1014 21:02:47.492071  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:47.547011  688786 logs.go:123] Gathering logs for kube-scheduler [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1] ...
	I1014 21:02:47.547108  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:47.639477  688786 logs.go:123] Gathering logs for kube-controller-manager [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de] ...
	I1014 21:02:47.639562  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:45.741178  706484 addons.go:514] duration metric: took 4.498351ms for enable addons: enabled=[]
	I1014 21:02:45.741286  706484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:02:45.977712  706484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:02:45.992884  706484 node_ready.go:35] waiting up to 6m0s for node "pause-361043" to be "Ready" ...
	I1014 21:02:50.188421  688786 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:02:50.188848  688786 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1014 21:02:50.188895  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:02:50.188953  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:02:50.219510  688786 cri.go:89] found id: "a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:50.219537  688786 cri.go:89] found id: ""
	I1014 21:02:50.219545  688786 logs.go:282] 1 containers: [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe]
	I1014 21:02:50.219600  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:50.224149  688786 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:02:50.224222  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:02:50.263402  688786 cri.go:89] found id: ""
	I1014 21:02:50.263430  688786 logs.go:282] 0 containers: []
	W1014 21:02:50.263438  688786 logs.go:284] No container was found matching "etcd"
	I1014 21:02:50.263444  688786 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:02:50.263501  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:02:50.307374  688786 cri.go:89] found id: ""
	I1014 21:02:50.307403  688786 logs.go:282] 0 containers: []
	W1014 21:02:50.307412  688786 logs.go:284] No container was found matching "coredns"
	I1014 21:02:50.307419  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:02:50.307478  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:02:50.338712  688786 cri.go:89] found id: "cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:50.338738  688786 cri.go:89] found id: ""
	I1014 21:02:50.338746  688786 logs.go:282] 1 containers: [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1]
	I1014 21:02:50.338801  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:50.343287  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:02:50.343364  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:02:50.372366  688786 cri.go:89] found id: ""
	I1014 21:02:50.372394  688786 logs.go:282] 0 containers: []
	W1014 21:02:50.372402  688786 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:02:50.372408  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:02:50.372464  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:02:50.402116  688786 cri.go:89] found id: "e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:50.402142  688786 cri.go:89] found id: ""
	I1014 21:02:50.402151  688786 logs.go:282] 1 containers: [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de]
	I1014 21:02:50.402212  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:50.406988  688786 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:02:50.407080  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:02:50.447089  688786 cri.go:89] found id: ""
	I1014 21:02:50.447120  688786 logs.go:282] 0 containers: []
	W1014 21:02:50.447134  688786 logs.go:284] No container was found matching "kindnet"
	I1014 21:02:50.447141  688786 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 21:02:50.447205  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 21:02:50.497891  688786 cri.go:89] found id: ""
	I1014 21:02:50.497925  688786 logs.go:282] 0 containers: []
	W1014 21:02:50.497937  688786 logs.go:284] No container was found matching "storage-provisioner"
	I1014 21:02:50.497947  688786 logs.go:123] Gathering logs for dmesg ...
	I1014 21:02:50.497959  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 21:02:50.529078  688786 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:02:50.529117  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:02:50.647754  688786 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:02:50.647778  688786 logs.go:123] Gathering logs for kube-apiserver [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe] ...
	I1014 21:02:50.647791  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:50.709326  688786 logs.go:123] Gathering logs for kube-scheduler [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1] ...
	I1014 21:02:50.709363  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:50.803709  688786 logs.go:123] Gathering logs for kube-controller-manager [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de] ...
	I1014 21:02:50.803786  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:50.840292  688786 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:02:50.840343  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:02:50.910857  688786 logs.go:123] Gathering logs for container status ...
	I1014 21:02:50.911024  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 21:02:50.955471  688786 logs.go:123] Gathering logs for kubelet ...
	I1014 21:02:50.955497  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:02:50.908034  706484 node_ready.go:49] node "pause-361043" is "Ready"
	I1014 21:02:50.908059  706484 node_ready.go:38] duration metric: took 4.915147993s for node "pause-361043" to be "Ready" ...
	I1014 21:02:50.908072  706484 api_server.go:52] waiting for apiserver process to appear ...
	I1014 21:02:50.908130  706484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 21:02:50.931861  706484 api_server.go:72] duration metric: took 5.19584253s to wait for apiserver process to appear ...
	I1014 21:02:50.931884  706484 api_server.go:88] waiting for apiserver healthz status ...
	I1014 21:02:50.931904  706484 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1014 21:02:50.960009  706484 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 21:02:50.960034  706484 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 21:02:51.432257  706484 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1014 21:02:51.441841  706484 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 21:02:51.441885  706484 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 21:02:51.932386  706484 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1014 21:02:51.941629  706484 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1014 21:02:51.942633  706484 api_server.go:141] control plane version: v1.34.1
	I1014 21:02:51.942660  706484 api_server.go:131] duration metric: took 1.010768565s to wait for apiserver health ...
	I1014 21:02:51.942669  706484 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 21:02:51.949176  706484 system_pods.go:59] 7 kube-system pods found
	I1014 21:02:51.949212  706484 system_pods.go:61] "coredns-66bc5c9577-q7c7w" [b735bb9e-6349-42f7-bcb7-469aa10cf818] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 21:02:51.949222  706484 system_pods.go:61] "etcd-pause-361043" [11b46cdc-422e-44c6-bb9a-05a404336542] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 21:02:51.949228  706484 system_pods.go:61] "kindnet-9c498" [6487afd8-213d-4a40-b1e4-4e6b93d23583] Running
	I1014 21:02:51.949235  706484 system_pods.go:61] "kube-apiserver-pause-361043" [adcb78e6-b272-4878-9f25-dbbc7f0d3565] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 21:02:51.949242  706484 system_pods.go:61] "kube-controller-manager-pause-361043" [53842385-4844-45db-97ef-eaec0f03796d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 21:02:51.949252  706484 system_pods.go:61] "kube-proxy-fzbsz" [e0e8d997-af68-4fec-9867-c0a08098f401] Running
	I1014 21:02:51.949259  706484 system_pods.go:61] "kube-scheduler-pause-361043" [61b58669-642e-4139-a8b0-db07e49496d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 21:02:51.949267  706484 system_pods.go:74] duration metric: took 6.593256ms to wait for pod list to return data ...
	I1014 21:02:51.949275  706484 default_sa.go:34] waiting for default service account to be created ...
	I1014 21:02:51.999784  706484 default_sa.go:45] found service account: "default"
	I1014 21:02:51.999815  706484 default_sa.go:55] duration metric: took 50.527783ms for default service account to be created ...
	I1014 21:02:51.999833  706484 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 21:02:52.047564  706484 system_pods.go:86] 7 kube-system pods found
	I1014 21:02:52.047601  706484 system_pods.go:89] "coredns-66bc5c9577-q7c7w" [b735bb9e-6349-42f7-bcb7-469aa10cf818] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 21:02:52.047612  706484 system_pods.go:89] "etcd-pause-361043" [11b46cdc-422e-44c6-bb9a-05a404336542] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 21:02:52.047619  706484 system_pods.go:89] "kindnet-9c498" [6487afd8-213d-4a40-b1e4-4e6b93d23583] Running
	I1014 21:02:52.047628  706484 system_pods.go:89] "kube-apiserver-pause-361043" [adcb78e6-b272-4878-9f25-dbbc7f0d3565] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 21:02:52.047654  706484 system_pods.go:89] "kube-controller-manager-pause-361043" [53842385-4844-45db-97ef-eaec0f03796d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 21:02:52.047659  706484 system_pods.go:89] "kube-proxy-fzbsz" [e0e8d997-af68-4fec-9867-c0a08098f401] Running
	I1014 21:02:52.047666  706484 system_pods.go:89] "kube-scheduler-pause-361043" [61b58669-642e-4139-a8b0-db07e49496d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 21:02:52.047677  706484 system_pods.go:126] duration metric: took 47.837195ms to wait for k8s-apps to be running ...
	I1014 21:02:52.047690  706484 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 21:02:52.047750  706484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:02:52.062435  706484 system_svc.go:56] duration metric: took 14.735858ms WaitForService to wait for kubelet
	I1014 21:02:52.062462  706484 kubeadm.go:586] duration metric: took 6.326448482s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:02:52.062482  706484 node_conditions.go:102] verifying NodePressure condition ...
	I1014 21:02:52.065968  706484 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 21:02:52.066003  706484 node_conditions.go:123] node cpu capacity is 2
	I1014 21:02:52.066017  706484 node_conditions.go:105] duration metric: took 3.529099ms to run NodePressure ...
	I1014 21:02:52.066030  706484 start.go:241] waiting for startup goroutines ...
	I1014 21:02:52.066042  706484 start.go:246] waiting for cluster config update ...
	I1014 21:02:52.066055  706484 start.go:255] writing updated cluster config ...
	I1014 21:02:52.066390  706484 ssh_runner.go:195] Run: rm -f paused
	I1014 21:02:52.070338  706484 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 21:02:52.070963  706484 kapi.go:59] client config for pause-361043: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/profiles/pause-361043/client.key", CAFile:"/home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 21:02:52.074539  706484 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q7c7w" in "kube-system" namespace to be "Ready" or be gone ...
	W1014 21:02:54.085710  706484 pod_ready.go:104] pod "coredns-66bc5c9577-q7c7w" is not "Ready", error: <nil>
	I1014 21:02:53.603267  688786 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:02:53.603628  688786 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1014 21:02:53.603668  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:02:53.603720  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:02:53.632145  688786 cri.go:89] found id: "a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:53.632164  688786 cri.go:89] found id: ""
	I1014 21:02:53.632173  688786 logs.go:282] 1 containers: [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe]
	I1014 21:02:53.632232  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:53.636186  688786 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:02:53.636253  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:02:53.666435  688786 cri.go:89] found id: ""
	I1014 21:02:53.666464  688786 logs.go:282] 0 containers: []
	W1014 21:02:53.666473  688786 logs.go:284] No container was found matching "etcd"
	I1014 21:02:53.666480  688786 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:02:53.666581  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:02:53.693524  688786 cri.go:89] found id: ""
	I1014 21:02:53.693548  688786 logs.go:282] 0 containers: []
	W1014 21:02:53.693557  688786 logs.go:284] No container was found matching "coredns"
	I1014 21:02:53.693564  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:02:53.693621  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:02:53.726454  688786 cri.go:89] found id: "cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:53.726477  688786 cri.go:89] found id: ""
	I1014 21:02:53.726486  688786 logs.go:282] 1 containers: [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1]
	I1014 21:02:53.726544  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:53.730373  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:02:53.730446  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:02:53.757223  688786 cri.go:89] found id: ""
	I1014 21:02:53.757249  688786 logs.go:282] 0 containers: []
	W1014 21:02:53.757260  688786 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:02:53.757266  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:02:53.757323  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:02:53.787437  688786 cri.go:89] found id: "e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:53.787458  688786 cri.go:89] found id: ""
	I1014 21:02:53.787467  688786 logs.go:282] 1 containers: [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de]
	I1014 21:02:53.787529  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:53.791362  688786 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:02:53.791446  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:02:53.820604  688786 cri.go:89] found id: ""
	I1014 21:02:53.820636  688786 logs.go:282] 0 containers: []
	W1014 21:02:53.820645  688786 logs.go:284] No container was found matching "kindnet"
	I1014 21:02:53.820651  688786 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 21:02:53.820710  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 21:02:53.848287  688786 cri.go:89] found id: ""
	I1014 21:02:53.848331  688786 logs.go:282] 0 containers: []
	W1014 21:02:53.848341  688786 logs.go:284] No container was found matching "storage-provisioner"
	I1014 21:02:53.848350  688786 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:02:53.848362  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:02:53.929431  688786 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:02:53.929452  688786 logs.go:123] Gathering logs for kube-apiserver [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe] ...
	I1014 21:02:53.929465  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:53.965225  688786 logs.go:123] Gathering logs for kube-scheduler [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1] ...
	I1014 21:02:53.965255  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:54.028801  688786 logs.go:123] Gathering logs for kube-controller-manager [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de] ...
	I1014 21:02:54.028839  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:54.058038  688786 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:02:54.058067  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:02:54.123579  688786 logs.go:123] Gathering logs for container status ...
	I1014 21:02:54.123618  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 21:02:54.155877  688786 logs.go:123] Gathering logs for kubelet ...
	I1014 21:02:54.155901  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:02:54.275924  688786 logs.go:123] Gathering logs for dmesg ...
	I1014 21:02:54.275961  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 21:02:56.796079  688786 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:02:56.796577  688786 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1014 21:02:56.796626  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:02:56.796687  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:02:56.823493  688786 cri.go:89] found id: "a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:56.823511  688786 cri.go:89] found id: ""
	I1014 21:02:56.823518  688786 logs.go:282] 1 containers: [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe]
	I1014 21:02:56.823574  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:56.827444  688786 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:02:56.827517  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:02:56.857837  688786 cri.go:89] found id: ""
	I1014 21:02:56.857863  688786 logs.go:282] 0 containers: []
	W1014 21:02:56.857872  688786 logs.go:284] No container was found matching "etcd"
	I1014 21:02:56.857878  688786 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:02:56.857933  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:02:56.884003  688786 cri.go:89] found id: ""
	I1014 21:02:56.884025  688786 logs.go:282] 0 containers: []
	W1014 21:02:56.884033  688786 logs.go:284] No container was found matching "coredns"
	I1014 21:02:56.884040  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:02:56.884098  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:02:56.911730  688786 cri.go:89] found id: "cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:56.911799  688786 cri.go:89] found id: ""
	I1014 21:02:56.911811  688786 logs.go:282] 1 containers: [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1]
	I1014 21:02:56.911901  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:56.915904  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:02:56.916029  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:02:56.942504  688786 cri.go:89] found id: ""
	I1014 21:02:56.942585  688786 logs.go:282] 0 containers: []
	W1014 21:02:56.942603  688786 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:02:56.942610  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:02:56.942681  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:02:56.969659  688786 cri.go:89] found id: "e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:56.969681  688786 cri.go:89] found id: ""
	I1014 21:02:56.969691  688786 logs.go:282] 1 containers: [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de]
	I1014 21:02:56.969771  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:02:56.973670  688786 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:02:56.973758  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:02:57.012390  688786 cri.go:89] found id: ""
	I1014 21:02:57.012415  688786 logs.go:282] 0 containers: []
	W1014 21:02:57.012424  688786 logs.go:284] No container was found matching "kindnet"
	I1014 21:02:57.012431  688786 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 21:02:57.012491  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 21:02:57.048661  688786 cri.go:89] found id: ""
	I1014 21:02:57.048685  688786 logs.go:282] 0 containers: []
	W1014 21:02:57.048694  688786 logs.go:284] No container was found matching "storage-provisioner"
	I1014 21:02:57.048703  688786 logs.go:123] Gathering logs for kube-apiserver [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe] ...
	I1014 21:02:57.048715  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:02:57.081104  688786 logs.go:123] Gathering logs for kube-scheduler [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1] ...
	I1014 21:02:57.081139  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:02:57.149555  688786 logs.go:123] Gathering logs for kube-controller-manager [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de] ...
	I1014 21:02:57.149591  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:02:57.177380  688786 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:02:57.177408  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:02:57.245232  688786 logs.go:123] Gathering logs for container status ...
	I1014 21:02:57.245278  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 21:02:57.278921  688786 logs.go:123] Gathering logs for kubelet ...
	I1014 21:02:57.278949  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:02:57.399467  688786 logs.go:123] Gathering logs for dmesg ...
	I1014 21:02:57.399504  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 21:02:57.417849  688786 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:02:57.417882  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:02:57.490254  688786 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:02:56.581093  706484 pod_ready.go:94] pod "coredns-66bc5c9577-q7c7w" is "Ready"
	I1014 21:02:56.581124  706484 pod_ready.go:86] duration metric: took 4.506557043s for pod "coredns-66bc5c9577-q7c7w" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:02:56.583709  706484 pod_ready.go:83] waiting for pod "etcd-pause-361043" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:02:56.587858  706484 pod_ready.go:94] pod "etcd-pause-361043" is "Ready"
	I1014 21:02:56.587887  706484 pod_ready.go:86] duration metric: took 4.150079ms for pod "etcd-pause-361043" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:02:56.590065  706484 pod_ready.go:83] waiting for pod "kube-apiserver-pause-361043" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:02:56.594215  706484 pod_ready.go:94] pod "kube-apiserver-pause-361043" is "Ready"
	I1014 21:02:56.594245  706484 pod_ready.go:86] duration metric: took 4.151843ms for pod "kube-apiserver-pause-361043" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:02:56.596684  706484 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-361043" in "kube-system" namespace to be "Ready" or be gone ...
	W1014 21:02:58.602598  706484 pod_ready.go:104] pod "kube-controller-manager-pause-361043" is not "Ready", error: <nil>
	W1014 21:03:01.102193  706484 pod_ready.go:104] pod "kube-controller-manager-pause-361043" is not "Ready", error: <nil>
	I1014 21:03:02.602627  706484 pod_ready.go:94] pod "kube-controller-manager-pause-361043" is "Ready"
	I1014 21:03:02.602654  706484 pod_ready.go:86] duration metric: took 6.005912904s for pod "kube-controller-manager-pause-361043" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:03:02.604940  706484 pod_ready.go:83] waiting for pod "kube-proxy-fzbsz" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:03:02.609472  706484 pod_ready.go:94] pod "kube-proxy-fzbsz" is "Ready"
	I1014 21:03:02.609499  706484 pod_ready.go:86] duration metric: took 4.532124ms for pod "kube-proxy-fzbsz" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:03:02.611854  706484 pod_ready.go:83] waiting for pod "kube-scheduler-pause-361043" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:03:02.778706  706484 pod_ready.go:94] pod "kube-scheduler-pause-361043" is "Ready"
	I1014 21:03:02.778737  706484 pod_ready.go:86] duration metric: took 166.856143ms for pod "kube-scheduler-pause-361043" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:03:02.778750  706484 pod_ready.go:40] duration metric: took 10.70837563s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 21:03:02.830930  706484 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1014 21:03:02.834155  706484 out.go:179] * Done! kubectl is now configured to use "pause-361043" cluster and "default" namespace by default
	I1014 21:02:59.991538  688786 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:02:59.991935  688786 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1014 21:02:59.991981  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 21:02:59.992040  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 21:03:00.100543  688786 cri.go:89] found id: "a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:03:00.100624  688786 cri.go:89] found id: ""
	I1014 21:03:00.100649  688786 logs.go:282] 1 containers: [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe]
	I1014 21:03:00.100754  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:03:00.115475  688786 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 21:03:00.115558  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 21:03:00.236792  688786 cri.go:89] found id: ""
	I1014 21:03:00.236819  688786 logs.go:282] 0 containers: []
	W1014 21:03:00.236829  688786 logs.go:284] No container was found matching "etcd"
	I1014 21:03:00.236837  688786 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 21:03:00.236908  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 21:03:00.275544  688786 cri.go:89] found id: ""
	I1014 21:03:00.275571  688786 logs.go:282] 0 containers: []
	W1014 21:03:00.275579  688786 logs.go:284] No container was found matching "coredns"
	I1014 21:03:00.275587  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 21:03:00.275657  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 21:03:00.342716  688786 cri.go:89] found id: "cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:03:00.342749  688786 cri.go:89] found id: ""
	I1014 21:03:00.342759  688786 logs.go:282] 1 containers: [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1]
	I1014 21:03:00.342851  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:03:00.348207  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 21:03:00.348290  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 21:03:00.383298  688786 cri.go:89] found id: ""
	I1014 21:03:00.383325  688786 logs.go:282] 0 containers: []
	W1014 21:03:00.383334  688786 logs.go:284] No container was found matching "kube-proxy"
	I1014 21:03:00.383342  688786 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 21:03:00.383404  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 21:03:00.416146  688786 cri.go:89] found id: "e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:03:00.416173  688786 cri.go:89] found id: ""
	I1014 21:03:00.416182  688786 logs.go:282] 1 containers: [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de]
	I1014 21:03:00.416252  688786 ssh_runner.go:195] Run: which crictl
	I1014 21:03:00.421039  688786 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 21:03:00.421124  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 21:03:00.452733  688786 cri.go:89] found id: ""
	I1014 21:03:00.452765  688786 logs.go:282] 0 containers: []
	W1014 21:03:00.452774  688786 logs.go:284] No container was found matching "kindnet"
	I1014 21:03:00.452781  688786 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 21:03:00.452850  688786 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 21:03:00.483787  688786 cri.go:89] found id: ""
	I1014 21:03:00.483866  688786 logs.go:282] 0 containers: []
	W1014 21:03:00.483888  688786 logs.go:284] No container was found matching "storage-provisioner"
	I1014 21:03:00.483914  688786 logs.go:123] Gathering logs for describe nodes ...
	I1014 21:03:00.483955  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 21:03:00.574561  688786 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 21:03:00.574582  688786 logs.go:123] Gathering logs for kube-apiserver [a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe] ...
	I1014 21:03:00.574597  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5a038eceb13536dcc639b112e3ac56889da564a6e482b3bb387d8b85d217efe"
	I1014 21:03:00.628481  688786 logs.go:123] Gathering logs for kube-scheduler [cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1] ...
	I1014 21:03:00.628517  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff3879a878352f030dcdd302a4ac4464b17d97207094b0e43267c13480708d1"
	I1014 21:03:00.713178  688786 logs.go:123] Gathering logs for kube-controller-manager [e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de] ...
	I1014 21:03:00.713212  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e115aeacc6d58666c376cb7c762cff6f84e6453e6219571c2eff6b42863470de"
	I1014 21:03:00.748995  688786 logs.go:123] Gathering logs for CRI-O ...
	I1014 21:03:00.749028  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 21:03:00.815682  688786 logs.go:123] Gathering logs for container status ...
	I1014 21:03:00.815714  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 21:03:00.849469  688786 logs.go:123] Gathering logs for kubelet ...
	I1014 21:03:00.849539  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 21:03:00.976667  688786 logs.go:123] Gathering logs for dmesg ...
	I1014 21:03:00.976703  688786 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	
	
	==> CRI-O <==
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.532898821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.604466983Z" level=info msg="Created container 23cf2e7d8049384cf8d3943f02d514c3e391d5ad704b75431865b05d3d029f22: kube-system/etcd-pause-361043/etcd" id=d25b7f23-894b-4efe-96f1-6d64b3f204a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.610150187Z" level=info msg="Starting container: 23cf2e7d8049384cf8d3943f02d514c3e391d5ad704b75431865b05d3d029f22" id=88a9669f-e396-4f49-85d0-bf268bdff15b name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.622038859Z" level=info msg="Started container" PID=2215 containerID=23cf2e7d8049384cf8d3943f02d514c3e391d5ad704b75431865b05d3d029f22 description=kube-system/etcd-pause-361043/etcd id=88a9669f-e396-4f49-85d0-bf268bdff15b name=/runtime.v1.RuntimeService/StartContainer sandboxID=22734e2dda3249ff64753ab00f5a313383f9440852bd1485f849f62fdf47eacd
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.630617126Z" level=info msg="Created container fbcaf7a84e4396d138d2b4f56efa1373b3bcedfc203b447d251e52b1052cba5c: kube-system/kube-apiserver-pause-361043/kube-apiserver" id=08d1a8fb-c598-433f-9efe-f6482014381d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.633306459Z" level=info msg="Starting container: fbcaf7a84e4396d138d2b4f56efa1373b3bcedfc203b447d251e52b1052cba5c" id=76adcc27-7f91-4375-8cbc-31850b90fe0d name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.640714602Z" level=info msg="Started container" PID=2228 containerID=fbcaf7a84e4396d138d2b4f56efa1373b3bcedfc203b447d251e52b1052cba5c description=kube-system/kube-apiserver-pause-361043/kube-apiserver id=76adcc27-7f91-4375-8cbc-31850b90fe0d name=/runtime.v1.RuntimeService/StartContainer sandboxID=b5f24ea729c91b3c22f47d9d183da6a6890d4eef08eb2d37e30bc80aff805fea
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.644538425Z" level=info msg="Created container 660ef0bec4428d1153fa555e4a0f7daa6380e392888604d04e0e4322d7771003: kube-system/kube-scheduler-pause-361043/kube-scheduler" id=7e952e1a-af91-4226-ac3e-60b8bc850b23 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.647136212Z" level=info msg="Created container a6a670e4b96d401ca63818f2706aa6e56c947a842bddbb42558d305cb8007f32: kube-system/kube-controller-manager-pause-361043/kube-controller-manager" id=d5476cb0-24eb-4749-a7a0-c3ba5b65e8e5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.650227094Z" level=info msg="Starting container: 660ef0bec4428d1153fa555e4a0f7daa6380e392888604d04e0e4322d7771003" id=bde03497-9c19-4638-956a-ce8ced86a76c name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.650713484Z" level=info msg="Starting container: a6a670e4b96d401ca63818f2706aa6e56c947a842bddbb42558d305cb8007f32" id=e16c9e54-94cd-4e00-ab9e-60744131d69f name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.658916928Z" level=info msg="Started container" PID=2224 containerID=660ef0bec4428d1153fa555e4a0f7daa6380e392888604d04e0e4322d7771003 description=kube-system/kube-scheduler-pause-361043/kube-scheduler id=bde03497-9c19-4638-956a-ce8ced86a76c name=/runtime.v1.RuntimeService/StartContainer sandboxID=9ae32023727319efe10177b16746b9f399b0609c61308150fda679eee1967c92
	Oct 14 21:02:44 pause-361043 crio[2056]: time="2025-10-14T21:02:44.666085002Z" level=info msg="Started container" PID=2237 containerID=a6a670e4b96d401ca63818f2706aa6e56c947a842bddbb42558d305cb8007f32 description=kube-system/kube-controller-manager-pause-361043/kube-controller-manager id=e16c9e54-94cd-4e00-ab9e-60744131d69f name=/runtime.v1.RuntimeService/StartContainer sandboxID=26e00dbaf68a708e39bd85c37c963d66536f9c0351c0ea3024d5fbb700035376
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.888788807Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.893566555Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.893621325Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.893675701Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.897203758Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.897241436Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.89726597Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.900759343Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.900796414Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.900837646Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.904088677Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:02:54 pause-361043 crio[2056]: time="2025-10-14T21:02:54.904120038Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a6a670e4b96d4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   24 seconds ago       Running             kube-controller-manager   1                   26e00dbaf68a7       kube-controller-manager-pause-361043   kube-system
	fbcaf7a84e439       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   24 seconds ago       Running             kube-apiserver            1                   b5f24ea729c91       kube-apiserver-pause-361043            kube-system
	23cf2e7d80493       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   24 seconds ago       Running             etcd                      1                   22734e2dda324       etcd-pause-361043                      kube-system
	660ef0bec4428       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   24 seconds ago       Running             kube-scheduler            1                   9ae3202372731       kube-scheduler-pause-361043            kube-system
	ec1e6454c986c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   9565642c79b38       coredns-66bc5c9577-q7c7w               kube-system
	0656608153f0e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   24 seconds ago       Running             kube-proxy                1                   2caa569e9d1e8       kube-proxy-fzbsz                       kube-system
	0b6f7661d8bed       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   60c6d59c1e5a2       kindnet-9c498                          kube-system
	29d1f875e07fb       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   36 seconds ago       Exited              coredns                   0                   9565642c79b38       coredns-66bc5c9577-q7c7w               kube-system
	fe32c1a3bade0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   2caa569e9d1e8       kube-proxy-fzbsz                       kube-system
	aa095dcde47c4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   60c6d59c1e5a2       kindnet-9c498                          kube-system
	5f45c9528a708       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   9ae3202372731       kube-scheduler-pause-361043            kube-system
	055ff8e53d7f5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   26e00dbaf68a7       kube-controller-manager-pause-361043   kube-system
	a49c5d87c8b81       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   22734e2dda324       etcd-pause-361043                      kube-system
	8d254f086a1c6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   b5f24ea729c91       kube-apiserver-pause-361043            kube-system
	
	
	==> coredns [29d1f875e07fb368ad19aeba36a9b146d4a9b45accd812d3388a16b14c5199c5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59355 - 9609 "HINFO IN 370189398641508826.7782346586602368151. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012875097s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ec1e6454c986ce31b077f25fc95ab32526d877c36bcb22d9c7e025f1cce4a245] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43828 - 3659 "HINFO IN 5811801055065340421.2246031547546777298. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012540767s
	
	
	==> describe nodes <==
	Name:               pause-361043
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-361043
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=pause-361043
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T21_01_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 21:01:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-361043
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 21:03:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 21:02:32 +0000   Tue, 14 Oct 2025 21:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 21:02:32 +0000   Tue, 14 Oct 2025 21:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 21:02:32 +0000   Tue, 14 Oct 2025 21:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 21:02:32 +0000   Tue, 14 Oct 2025 21:02:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-361043
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf347392368842a7bb7f131e8ecf1a1f
	  System UUID:                f23f452e-5c6e-4e29-afd4-8f1ca889b956
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-q7c7w                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     78s
	  kube-system                 etcd-pause-361043                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         83s
	  kube-system                 kindnet-9c498                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      79s
	  kube-system                 kube-apiserver-pause-361043             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-pause-361043    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-fzbsz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-361043             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 77s                kube-proxy       
	  Normal   Starting                 17s                kube-proxy       
	  Warning  CgroupV1                 91s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  91s (x8 over 91s)  kubelet          Node pause-361043 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    91s (x8 over 91s)  kubelet          Node pause-361043 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     91s (x8 over 91s)  kubelet          Node pause-361043 status is now: NodeHasSufficientPID
	  Normal   Starting                 83s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 83s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  83s                kubelet          Node pause-361043 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s                kubelet          Node pause-361043 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s                kubelet          Node pause-361043 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           79s                node-controller  Node pause-361043 event: Registered Node pause-361043 in Controller
	  Normal   NodeReady                37s                kubelet          Node pause-361043 status is now: NodeReady
	  Normal   RegisteredNode           16s                node-controller  Node pause-361043 event: Registered Node pause-361043 in Controller
	
	
	==> dmesg <==
	[  +4.162636] overlayfs: idmapped layers are currently not supported
	[Oct14 20:26] overlayfs: idmapped layers are currently not supported
	[Oct14 20:34] overlayfs: idmapped layers are currently not supported
	[  +3.850661] overlayfs: idmapped layers are currently not supported
	[Oct14 20:35] overlayfs: idmapped layers are currently not supported
	[Oct14 20:36] overlayfs: idmapped layers are currently not supported
	[Oct14 20:38] overlayfs: idmapped layers are currently not supported
	[Oct14 20:43] overlayfs: idmapped layers are currently not supported
	[ +36.710003] overlayfs: idmapped layers are currently not supported
	[Oct14 20:45] overlayfs: idmapped layers are currently not supported
	[Oct14 20:46] overlayfs: idmapped layers are currently not supported
	[Oct14 20:47] overlayfs: idmapped layers are currently not supported
	[Oct14 20:48] overlayfs: idmapped layers are currently not supported
	[Oct14 20:49] overlayfs: idmapped layers are currently not supported
	[ +15.861847] overlayfs: idmapped layers are currently not supported
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [23cf2e7d8049384cf8d3943f02d514c3e391d5ad704b75431865b05d3d029f22] <==
	{"level":"warn","ts":"2025-10-14T21:02:49.414046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.433550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.449344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.468148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.490723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.500965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.521033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.535694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.564469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.578405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.621999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.624139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.642141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.668653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.682861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.703840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.724526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.738565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.756633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.773994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.812263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.843486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.880203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.884857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:02:49.938363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51906","server-name":"","error":"EOF"}
	
	
	==> etcd [a49c5d87c8b814e895398783eb4a4f8388fdc2356900055aae236ddce0b78dfa] <==
	{"level":"warn","ts":"2025-10-14T21:01:42.025346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:01:42.045557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:01:42.068015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:01:42.100578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:01:42.112999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:01:42.165435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:01:42.252399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57340","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-14T21:02:37.047309Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-14T21:02:37.047365Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-361043","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-14T21:02:37.047455Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-14T21:02:37.191890Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-14T21:02:37.192263Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-14T21:02:37.192046Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-14T21:02:37.192080Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"warn","ts":"2025-10-14T21:02:37.192190Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-14T21:02:37.192446Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-14T21:02:37.192488Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-14T21:02:37.192534Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-14T21:02:37.192571Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-14T21:02:37.192307Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-14T21:02:37.192836Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-14T21:02:37.195939Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-14T21:02:37.196066Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-14T21:02:37.196160Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-14T21:02:37.196194Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-361043","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 21:03:09 up  3:45,  0 user,  load average: 2.36, 2.41, 2.06
	Linux pause-361043 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0b6f7661d8beda525ec254094f39f88e7c53c0c15a7ddaad9ae3d5d15de7ab78] <==
	I1014 21:02:44.681093       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:02:44.681701       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1014 21:02:44.681844       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:02:44.681856       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:02:44.681871       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:02:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:02:44.888244       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:02:44.889955       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:02:44.902455       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:02:44.903447       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1014 21:02:51.103862       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1014 21:02:51.103969       1 metrics.go:72] Registering metrics
	I1014 21:02:51.104054       1 controller.go:711] "Syncing nftables rules"
	I1014 21:02:54.888286       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1014 21:02:54.888435       1 main.go:301] handling current node
	I1014 21:03:04.888390       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1014 21:03:04.888452       1 main.go:301] handling current node
	
	
	==> kindnet [aa095dcde47c4e0f4ed9486946661610675ccee537498ca0707b769339e8e582] <==
	I1014 21:01:51.678097       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:01:51.679589       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1014 21:01:51.681999       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:01:51.682102       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:01:51.682148       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:01:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:01:51.875804       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:01:51.875823       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:01:51.875832       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:01:51.876176       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1014 21:02:21.875989       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1014 21:02:21.876111       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1014 21:02:21.876242       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1014 21:02:21.877341       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1014 21:02:23.375994       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1014 21:02:23.376026       1 metrics.go:72] Registering metrics
	I1014 21:02:23.376096       1 controller.go:711] "Syncing nftables rules"
	I1014 21:02:31.875875       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1014 21:02:31.875934       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8d254f086a1c65340f4c5ac0cbb7b76bbedcd67dd47f333308d7d82abfd769b6] <==
	W1014 21:02:37.070413       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.070459       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.070510       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.070555       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.070602       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.070663       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.070709       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.070754       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.071564       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.072955       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.072992       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073017       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073040       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073065       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073091       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073115       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073140       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073166       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073197       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073223       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073254       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073280       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073307       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 21:02:37.073333       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [fbcaf7a84e4396d138d2b4f56efa1373b3bcedfc203b447d251e52b1052cba5c] <==
	I1014 21:02:50.926340       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 21:02:51.011358       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:02:51.028568       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1014 21:02:51.028679       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 21:02:51.028714       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 21:02:51.028893       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1014 21:02:51.028965       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1014 21:02:51.029025       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1014 21:02:51.041437       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1014 21:02:51.042497       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1014 21:02:51.045141       1 policy_source.go:240] refreshing policies
	I1014 21:02:51.096969       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1014 21:02:51.114762       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 21:02:51.126382       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 21:02:51.126493       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 21:02:51.126690       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 21:02:51.129358       1 cache.go:39] Caches are synced for autoregister controller
	E1014 21:02:51.130321       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1014 21:02:51.132818       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1014 21:02:51.657593       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 21:02:51.908623       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 21:02:53.306690       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 21:02:53.507230       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1014 21:02:53.606669       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 21:02:53.711774       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [055ff8e53d7f5e81c45b6752bfdfaa25d49b2274d3c55e4ff21315a5cc1b8042] <==
	I1014 21:01:50.051082       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1014 21:01:50.052776       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:01:50.058435       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1014 21:01:50.074051       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:01:50.078741       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:01:50.094235       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1014 21:01:50.094600       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1014 21:01:50.096480       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1014 21:01:50.096529       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1014 21:01:50.096559       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1014 21:01:50.096629       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1014 21:01:50.096681       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1014 21:01:50.097702       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1014 21:01:50.097892       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1014 21:01:50.098212       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1014 21:01:50.098618       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1014 21:01:50.098774       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1014 21:01:50.098807       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1014 21:01:50.098867       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1014 21:01:50.098896       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1014 21:01:50.098923       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1014 21:01:50.099850       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1014 21:01:50.100402       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1014 21:01:50.112429       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1014 21:02:35.060484       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [a6a670e4b96d401ca63818f2706aa6e56c947a842bddbb42558d305cb8007f32] <==
	I1014 21:02:53.309919       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1014 21:02:53.309985       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1014 21:02:53.310013       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 21:02:53.310023       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1014 21:02:53.310029       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1014 21:02:53.311365       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:02:53.321637       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:02:53.322673       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1014 21:02:53.325825       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1014 21:02:53.325980       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1014 21:02:53.329154       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1014 21:02:53.329164       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1014 21:02:53.332566       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1014 21:02:53.333808       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1014 21:02:53.336083       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1014 21:02:53.337258       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1014 21:02:53.340568       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1014 21:02:53.343832       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1014 21:02:53.347430       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1014 21:02:53.348202       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1014 21:02:53.348361       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1014 21:02:53.348453       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1014 21:02:53.351519       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:02:53.353644       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1014 21:02:53.355876       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	
	
	==> kube-proxy [0656608153f0e568502f545a2d6fcbc28356167d353ce4ef22b6e0c9db00be42] <==
	I1014 21:02:44.709605       1 server_linux.go:53] "Using iptables proxy"
	I1014 21:02:46.199666       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 21:02:51.199831       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 21:02:51.199896       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1014 21:02:51.199961       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 21:02:51.264707       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:02:51.264823       1 server_linux.go:132] "Using iptables Proxier"
	I1014 21:02:51.272604       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 21:02:51.273049       1 server.go:527] "Version info" version="v1.34.1"
	I1014 21:02:51.274443       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:02:51.279163       1 config.go:200] "Starting service config controller"
	I1014 21:02:51.280303       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 21:02:51.280571       1 config.go:106] "Starting endpoint slice config controller"
	I1014 21:02:51.280602       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 21:02:51.280653       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 21:02:51.280703       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 21:02:51.285301       1 config.go:309] "Starting node config controller"
	I1014 21:02:51.287324       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 21:02:51.287416       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 21:02:51.380932       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1014 21:02:51.382527       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 21:02:51.382564       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [fe32c1a3bade0fcd3521bb4bfa21cb5373816ddea996c29b888604e2792f7df8] <==
	I1014 21:01:51.676940       1 server_linux.go:53] "Using iptables proxy"
	I1014 21:01:51.777975       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 21:01:51.880143       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 21:01:51.880196       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1014 21:01:51.880304       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 21:01:51.995888       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:01:51.995996       1 server_linux.go:132] "Using iptables Proxier"
	I1014 21:01:52.007592       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 21:01:52.007911       1 server.go:527] "Version info" version="v1.34.1"
	I1014 21:01:52.007924       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:01:52.009487       1 config.go:200] "Starting service config controller"
	I1014 21:01:52.009498       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 21:01:52.009513       1 config.go:106] "Starting endpoint slice config controller"
	I1014 21:01:52.009518       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 21:01:52.009531       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 21:01:52.009537       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 21:01:52.019435       1 config.go:309] "Starting node config controller"
	I1014 21:01:52.019456       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 21:01:52.019465       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 21:01:52.112454       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 21:01:52.114279       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 21:01:52.114362       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5f45c9528a708bad4f13dd1f5f8d17082f4445a4bfd9e9b4dd4679184e33b4c7] <==
	E1014 21:01:43.745941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1014 21:01:43.746071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1014 21:01:43.746965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1014 21:01:43.747088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1014 21:01:43.747231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1014 21:01:43.747466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1014 21:01:43.747570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1014 21:01:43.747628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1014 21:01:43.747679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1014 21:01:43.747758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1014 21:01:43.747809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1014 21:01:43.747870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1014 21:01:43.747925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1014 21:01:43.748056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1014 21:01:43.748128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1014 21:01:43.748200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1014 21:01:43.748226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1014 21:01:43.748960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1014 21:01:45.135950       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:02:37.059148       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1014 21:02:37.059279       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1014 21:02:37.059308       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1014 21:02:37.059335       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:02:37.059672       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1014 21:02:37.059692       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [660ef0bec4428d1153fa555e4a0f7daa6380e392888604d04e0e4322d7771003] <==
	I1014 21:02:47.343792       1 serving.go:386] Generated self-signed cert in-memory
	W1014 21:02:50.960747       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1014 21:02:50.960852       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1014 21:02:50.960889       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 21:02:50.960932       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 21:02:51.044262       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 21:02:51.045309       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:02:51.047680       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 21:02:51.052406       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:02:51.056844       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:02:51.052445       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 21:02:51.157613       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.350612    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-361043\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6ea7ed8bc1f3dd79179be12356e4393e" pod="kube-system/kube-scheduler-pause-361043"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.350902    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-361043\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="ec849d8c65d9dfc160d5f2b6d4cb19f8" pod="kube-system/etcd-pause-361043"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.351176    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-361043\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e805b6bd7dcfd9e2f78a53b618f01505" pod="kube-system/kube-apiserver-pause-361043"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.351720    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-9c498\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6487afd8-213d-4a40-b1e4-4e6b93d23583" pod="kube-system/kindnet-9c498"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: I1014 21:02:44.368767    1302 scope.go:117] "RemoveContainer" containerID="8d254f086a1c65340f4c5ac0cbb7b76bbedcd67dd47f333308d7d82abfd769b6"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.369244    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-361043\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e805b6bd7dcfd9e2f78a53b618f01505" pod="kube-system/kube-apiserver-pause-361043"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.369864    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-9c498\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6487afd8-213d-4a40-b1e4-4e6b93d23583" pod="kube-system/kindnet-9c498"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.370136    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fzbsz\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e0e8d997-af68-4fec-9867-c0a08098f401" pod="kube-system/kube-proxy-fzbsz"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.370416    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-q7c7w\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b735bb9e-6349-42f7-bcb7-469aa10cf818" pod="kube-system/coredns-66bc5c9577-q7c7w"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.370793    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-361043\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="2c77ac6e452a94256a526b5c17c9d836" pod="kube-system/kube-controller-manager-pause-361043"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.371059    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-361043\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6ea7ed8bc1f3dd79179be12356e4393e" pod="kube-system/kube-scheduler-pause-361043"
	Oct 14 21:02:44 pause-361043 kubelet[1302]: E1014 21:02:44.371374    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-361043\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="ec849d8c65d9dfc160d5f2b6d4cb19f8" pod="kube-system/etcd-pause-361043"
	Oct 14 21:02:50 pause-361043 kubelet[1302]: E1014 21:02:50.767517    1302 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-361043\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-361043' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 14 21:02:50 pause-361043 kubelet[1302]: E1014 21:02:50.768274    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-361043\" is forbidden: User \"system:node:pause-361043\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-361043' and this object" podUID="ec849d8c65d9dfc160d5f2b6d4cb19f8" pod="kube-system/etcd-pause-361043"
	Oct 14 21:02:50 pause-361043 kubelet[1302]: E1014 21:02:50.768710    1302 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-361043\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-361043' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 14 21:02:50 pause-361043 kubelet[1302]: E1014 21:02:50.768834    1302 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-361043\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-361043' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 14 21:02:50 pause-361043 kubelet[1302]: E1014 21:02:50.861314    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-361043\" is forbidden: User \"system:node:pause-361043\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-361043' and this object" podUID="e805b6bd7dcfd9e2f78a53b618f01505" pod="kube-system/kube-apiserver-pause-361043"
	Oct 14 21:02:50 pause-361043 kubelet[1302]: E1014 21:02:50.940943    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-9c498\" is forbidden: User \"system:node:pause-361043\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-361043' and this object" podUID="6487afd8-213d-4a40-b1e4-4e6b93d23583" pod="kube-system/kindnet-9c498"
	Oct 14 21:02:50 pause-361043 kubelet[1302]: E1014 21:02:50.966138    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-fzbsz\" is forbidden: User \"system:node:pause-361043\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-361043' and this object" podUID="e0e8d997-af68-4fec-9867-c0a08098f401" pod="kube-system/kube-proxy-fzbsz"
	Oct 14 21:02:50 pause-361043 kubelet[1302]: E1014 21:02:50.985743    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-q7c7w\" is forbidden: User \"system:node:pause-361043\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-361043' and this object" podUID="b735bb9e-6349-42f7-bcb7-469aa10cf818" pod="kube-system/coredns-66bc5c9577-q7c7w"
	Oct 14 21:02:50 pause-361043 kubelet[1302]: E1014 21:02:50.999049    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-361043\" is forbidden: User \"system:node:pause-361043\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-361043' and this object" podUID="2c77ac6e452a94256a526b5c17c9d836" pod="kube-system/kube-controller-manager-pause-361043"
	Oct 14 21:02:56 pause-361043 kubelet[1302]: W1014 21:02:56.363372    1302 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 14 21:03:03 pause-361043 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 14 21:03:03 pause-361043 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 14 21:03:03 pause-361043 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-361043 -n pause-361043
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-361043 -n pause-361043: exit status 2 (358.517447ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
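The --format value used in the status probe above is a Go template over minikube's status fields, so one probe can report several components at once. A minimal sketch combining the two fields this report uses elsewhere (.APIServer here, .Host in the old-k8s-version section below):

	out/minikube-linux-arm64 status -p pause-361043 --format '{{.Host}} {{.APIServer}}'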
helpers_test.go:269: (dbg) Run:  kubectl --context pause-361043 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.26s)
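The kube-scheduler log captured above suggests its own remediation for the extension-apiserver-authentication warnings: a rolebinding granting the extension-apiserver-authentication-reader role. A minimal sketch against this profile, addressed to the system:kube-scheduler user that appears in the errors (the rolebinding name is illustrative, and this targets only the warning, not the pause failure itself):

	# hypothetical rolebinding name; role and user are taken from the scheduler log above
	kubectl --context pause-361043 -n kube-system create rolebinding \
	  scheduler-authentication-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler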

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-624341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-624341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (278.79461ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:14:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-624341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
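The MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-state check, which lists containers with runc inside the node; the stderr shows it failed because /run/runc does not exist on this CRI-O node. The quoted probe can be re-run by hand against the profile (wrapping it in minikube ssh is the only assumption here):

	out/minikube-linux-arm64 ssh -p old-k8s-version-624341 -- sudo runc list -f json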
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-624341 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-624341 describe deploy/metrics-server -n kube-system: exit status 1 (92.690896ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-624341 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
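The expected reference shows how the two flags compose: the --registries value (fake.domain) is prefixed to the --images value (registry.k8s.io/echoserver:1.4), yielding fake.domain/registry.k8s.io/echoserver:1.4. Had the deployment been created, the rendered image could be read back directly; a sketch (in this run the deployment was never created, per the NotFound error above):

	kubectl --context old-k8s-version-624341 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'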
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-624341
helpers_test.go:243: (dbg) docker inspect old-k8s-version-624341:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e",
	        "Created": "2025-10-14T21:13:39.162471971Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 725920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:13:39.227057109Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e/hostname",
	        "HostsPath": "/var/lib/docker/containers/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e/hosts",
	        "LogPath": "/var/lib/docker/containers/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e-json.log",
	        "Name": "/old-k8s-version-624341",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-624341:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-624341",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e",
	                "LowerDir": "/var/lib/docker/overlay2/df63c5996c488485ebb370ec3026a56632690e92fc52f35ca0adac5d4be4589c-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/df63c5996c488485ebb370ec3026a56632690e92fc52f35ca0adac5d4be4589c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/df63c5996c488485ebb370ec3026a56632690e92fc52f35ca0adac5d4be4589c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/df63c5996c488485ebb370ec3026a56632690e92fc52f35ca0adac5d4be4589c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-624341",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-624341/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-624341",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-624341",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-624341",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e971987e88678f9779b1240e771a20cd3a295c8885512af43c912144d89ca02",
	            "SandboxKey": "/var/run/docker/netns/5e971987e886",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33782"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33783"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33786"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33784"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33785"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-624341": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:b7:6d:6f:e7:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8335d3f3a3006af89042686dfaf481d24c09a8bd966d06b2f14f4f84c184d6d9",
	                    "EndpointID": "3b10c389193f7645f027b0c738ef71431cf3bf792327daa4990db138e9aafc82",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-624341",
	                        "15ab9b4ad277"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
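The inspect output above shows the container's API server port, 8443/tcp, published on a loopback ephemeral port (127.0.0.1:33785 in this run). The same mapping can be read back without parsing the full JSON; a sketch:

	docker port old-k8s-version-624341 8443/tcp   # prints 127.0.0.1:33785 for this run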
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-624341 -n old-k8s-version-624341
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-624341 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-624341 logs -n 25: (1.232378999s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-810782 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo containerd config dump                                                                                                                                                                                                  │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo crio config                                                                                                                                                                                                             │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ delete  │ -p cilium-810782                                                                                                                                                                                                                              │ cilium-810782             │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │ 14 Oct 25 21:04 UTC │
	│ start   │ -p force-systemd-env-347955 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-347955  │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ force-systemd-flag-162846 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-162846 │ jenkins │ v1.37.0 │ 14 Oct 25 21:11 UTC │ 14 Oct 25 21:11 UTC │
	│ delete  │ -p force-systemd-flag-162846                                                                                                                                                                                                                  │ force-systemd-flag-162846 │ jenkins │ v1.37.0 │ 14 Oct 25 21:11 UTC │ 14 Oct 25 21:11 UTC │
	│ start   │ -p cert-expiration-341199 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-341199    │ jenkins │ v1.37.0 │ 14 Oct 25 21:11 UTC │ 14 Oct 25 21:12 UTC │
	│ delete  │ -p force-systemd-env-347955                                                                                                                                                                                                                   │ force-systemd-env-347955  │ jenkins │ v1.37.0 │ 14 Oct 25 21:12 UTC │ 14 Oct 25 21:12 UTC │
	│ start   │ -p cert-options-967887 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-967887       │ jenkins │ v1.37.0 │ 14 Oct 25 21:12 UTC │ 14 Oct 25 21:13 UTC │
	│ ssh     │ cert-options-967887 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-967887       │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ ssh     │ -p cert-options-967887 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-967887       │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ delete  │ -p cert-options-967887                                                                                                                                                                                                                        │ cert-options-967887       │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ start   │ -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-624341    │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-624341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-624341    │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:13:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:13:33.117415  725543 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:13:33.117671  725543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:13:33.117722  725543 out.go:374] Setting ErrFile to fd 2...
	I1014 21:13:33.117744  725543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:13:33.118173  725543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:13:33.118856  725543 out.go:368] Setting JSON to false
	I1014 21:13:33.119956  725543 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14164,"bootTime":1760462250,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:13:33.120055  725543 start.go:141] virtualization:  
	I1014 21:13:33.124213  725543 out.go:179] * [old-k8s-version-624341] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:13:33.127676  725543 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:13:33.127758  725543 notify.go:220] Checking for updates...
	I1014 21:13:33.134039  725543 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:13:33.137229  725543 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:13:33.140351  725543 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:13:33.143351  725543 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:13:33.146284  725543 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:13:33.149773  725543 config.go:182] Loaded profile config "cert-expiration-341199": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:13:33.149892  725543 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:13:33.174610  725543 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:13:33.174749  725543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:13:33.239173  725543 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:13:33.229785136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:13:33.239281  725543 docker.go:318] overlay module found
	I1014 21:13:33.242290  725543 out.go:179] * Using the docker driver based on user configuration
	I1014 21:13:33.245199  725543 start.go:305] selected driver: docker
	I1014 21:13:33.245223  725543 start.go:925] validating driver "docker" against <nil>
	I1014 21:13:33.245258  725543 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:13:33.246007  725543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:13:33.309661  725543 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:13:33.299883311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:13:33.309825  725543 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 21:13:33.310069  725543 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:13:33.313148  725543 out.go:179] * Using Docker driver with root privileges
	I1014 21:13:33.315924  725543 cni.go:84] Creating CNI manager for ""
	I1014 21:13:33.315988  725543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:13:33.316000  725543 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 21:13:33.316079  725543 start.go:349] cluster config:
	{Name:old-k8s-version-624341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-624341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:13:33.321103  725543 out.go:179] * Starting "old-k8s-version-624341" primary control-plane node in "old-k8s-version-624341" cluster
	I1014 21:13:33.323912  725543 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:13:33.326891  725543 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:13:33.329959  725543 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1014 21:13:33.330047  725543 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1014 21:13:33.330058  725543 cache.go:58] Caching tarball of preloaded images
	I1014 21:13:33.330164  725543 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:13:33.330189  725543 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1014 21:13:33.330312  725543 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/config.json ...
	I1014 21:13:33.330346  725543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/config.json: {Name:mk93de689681967c5b61851520cc61f6f5e7d73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:13:33.330541  725543 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:13:33.349960  725543 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:13:33.349982  725543 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:13:33.350008  725543 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:13:33.350030  725543 start.go:360] acquireMachinesLock for old-k8s-version-624341: {Name:mk010442ec4784dd16f6ab218cf343c9611c3a9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:13:33.350146  725543 start.go:364] duration metric: took 95.279µs to acquireMachinesLock for "old-k8s-version-624341"
	I1014 21:13:33.350177  725543 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-624341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-624341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:13:33.350248  725543 start.go:125] createHost starting for "" (driver="docker")
	I1014 21:13:33.353576  725543 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 21:13:33.353796  725543 start.go:159] libmachine.API.Create for "old-k8s-version-624341" (driver="docker")
	I1014 21:13:33.353842  725543 client.go:168] LocalClient.Create starting
	I1014 21:13:33.353913  725543 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem
	I1014 21:13:33.353952  725543 main.go:141] libmachine: Decoding PEM data...
	I1014 21:13:33.353970  725543 main.go:141] libmachine: Parsing certificate...
	I1014 21:13:33.354025  725543 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem
	I1014 21:13:33.354046  725543 main.go:141] libmachine: Decoding PEM data...
	I1014 21:13:33.354065  725543 main.go:141] libmachine: Parsing certificate...
	I1014 21:13:33.354428  725543 cli_runner.go:164] Run: docker network inspect old-k8s-version-624341 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 21:13:33.372706  725543 cli_runner.go:211] docker network inspect old-k8s-version-624341 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 21:13:33.372783  725543 network_create.go:284] running [docker network inspect old-k8s-version-624341] to gather additional debugging logs...
	I1014 21:13:33.372804  725543 cli_runner.go:164] Run: docker network inspect old-k8s-version-624341
	W1014 21:13:33.388868  725543 cli_runner.go:211] docker network inspect old-k8s-version-624341 returned with exit code 1
	I1014 21:13:33.388903  725543 network_create.go:287] error running [docker network inspect old-k8s-version-624341]: docker network inspect old-k8s-version-624341: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-624341 not found
	I1014 21:13:33.388918  725543 network_create.go:289] output of [docker network inspect old-k8s-version-624341]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-624341 not found
	
	** /stderr **
	I1014 21:13:33.389031  725543 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:13:33.405360  725543 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3483fd8ccce IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:16:f0:2b:be:5b} reservation:<nil>}
	I1014 21:13:33.405725  725543 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-92eecc9d5dcd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:46:39:04:de:cc:9d} reservation:<nil>}
	I1014 21:13:33.406109  725543 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6db3c426d67c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:2a:f4:c0:70:fe} reservation:<nil>}
	I1014 21:13:33.406390  725543 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d00615450e54 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6a:bc:cc:3b:8e:06} reservation:<nil>}
	I1014 21:13:33.406933  725543 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e1130}
	I1014 21:13:33.406957  725543 network_create.go:124] attempt to create docker network old-k8s-version-624341 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1014 21:13:33.407018  725543 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-624341 old-k8s-version-624341
	I1014 21:13:33.469317  725543 network_create.go:108] docker network old-k8s-version-624341 192.168.85.0/24 created
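
	(editor's note) The subnet scan above rules out 192.168.49/58/67/76 before settling on 192.168.85.0/24. For anyone reproducing that check by hand while debugging subnet collisions, a rough shell equivalent — not minikube's actual code, which lives in network.go — is:

	# list the subnets already claimed by local docker bridge networks,
	# which is how the taken ranges above were ruled out before picking 192.168.85.0/24
	docker network ls --filter driver=bridge -q \
	  | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
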
	I1014 21:13:33.469354  725543 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-624341" container
	I1014 21:13:33.469445  725543 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 21:13:33.485894  725543 cli_runner.go:164] Run: docker volume create old-k8s-version-624341 --label name.minikube.sigs.k8s.io=old-k8s-version-624341 --label created_by.minikube.sigs.k8s.io=true
	I1014 21:13:33.503774  725543 oci.go:103] Successfully created a docker volume old-k8s-version-624341
	I1014 21:13:33.503870  725543 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-624341-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-624341 --entrypoint /usr/bin/test -v old-k8s-version-624341:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 21:13:34.062096  725543 oci.go:107] Successfully prepared a docker volume old-k8s-version-624341
	I1014 21:13:34.062144  725543 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1014 21:13:34.062173  725543 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 21:13:34.062261  725543 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-624341:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 21:13:39.095161  725543 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-624341:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (5.032861019s)
	I1014 21:13:39.095192  725543 kic.go:203] duration metric: took 5.033016672s to extract preloaded images to volume ...
	W1014 21:13:39.095347  725543 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1014 21:13:39.095466  725543 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 21:13:39.146876  725543 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-624341 --name old-k8s-version-624341 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-624341 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-624341 --network old-k8s-version-624341 --ip 192.168.85.2 --volume old-k8s-version-624341:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 21:13:39.465682  725543 cli_runner.go:164] Run: docker container inspect old-k8s-version-624341 --format={{.State.Running}}
	I1014 21:13:39.490387  725543 cli_runner.go:164] Run: docker container inspect old-k8s-version-624341 --format={{.State.Status}}
	I1014 21:13:39.514526  725543 cli_runner.go:164] Run: docker exec old-k8s-version-624341 stat /var/lib/dpkg/alternatives/iptables
	I1014 21:13:39.569765  725543 oci.go:144] the created container "old-k8s-version-624341" has a running status.
	I1014 21:13:39.569794  725543 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/old-k8s-version-624341/id_rsa...
	I1014 21:13:41.777731  725543 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-545047/.minikube/machines/old-k8s-version-624341/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 21:13:41.798056  725543 cli_runner.go:164] Run: docker container inspect old-k8s-version-624341 --format={{.State.Status}}
	I1014 21:13:41.816965  725543 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 21:13:41.816987  725543 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-624341 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 21:13:41.856963  725543 cli_runner.go:164] Run: docker container inspect old-k8s-version-624341 --format={{.State.Status}}
	I1014 21:13:41.873399  725543 machine.go:93] provisionDockerMachine start ...
	I1014 21:13:41.873501  725543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-624341
	I1014 21:13:41.890186  725543 main.go:141] libmachine: Using SSH client type: native
	I1014 21:13:41.890534  725543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33782 <nil> <nil>}
	I1014 21:13:41.890550  725543 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:13:42.036280  725543 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-624341
	
	I1014 21:13:42.036308  725543 ubuntu.go:182] provisioning hostname "old-k8s-version-624341"
	I1014 21:13:42.036424  725543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-624341
	I1014 21:13:42.055164  725543 main.go:141] libmachine: Using SSH client type: native
	I1014 21:13:42.055489  725543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33782 <nil> <nil>}
	I1014 21:13:42.055509  725543 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-624341 && echo "old-k8s-version-624341" | sudo tee /etc/hostname
	I1014 21:13:42.222671  725543 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-624341
	
	I1014 21:13:42.222767  725543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-624341
	I1014 21:13:42.242272  725543 main.go:141] libmachine: Using SSH client type: native
	I1014 21:13:42.242601  725543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33782 <nil> <nil>}
	I1014 21:13:42.242642  725543 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-624341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-624341/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-624341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:13:42.392754  725543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
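
	(editor's note) The SSH session above goes through the published port mapping 127.0.0.1:33782 -> container port 22. As an illustrative debugging aid only, the same node can be reached manually with the port and key path recorded in this log:

	# connect to the kic container over its published SSH port (values taken from the log above)
	ssh -o StrictHostKeyChecking=no -p 33782 \
	  -i /home/jenkins/minikube-integration/21409-545047/.minikube/machines/old-k8s-version-624341/id_rsa \
	  docker@127.0.0.1
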
	I1014 21:13:42.392783  725543 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:13:42.392806  725543 ubuntu.go:190] setting up certificates
	I1014 21:13:42.392815  725543 provision.go:84] configureAuth start
	I1014 21:13:42.392889  725543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-624341
	I1014 21:13:42.409043  725543 provision.go:143] copyHostCerts
	I1014 21:13:42.409110  725543 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:13:42.409122  725543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:13:42.409201  725543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:13:42.409326  725543 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:13:42.409338  725543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:13:42.409365  725543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:13:42.409424  725543 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:13:42.409433  725543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:13:42.409456  725543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:13:42.409506  725543 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-624341 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-624341]
	I1014 21:13:42.807337  725543 provision.go:177] copyRemoteCerts
	I1014 21:13:42.807412  725543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:13:42.807453  725543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-624341
	I1014 21:13:42.827061  725543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33782 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/old-k8s-version-624341/id_rsa Username:docker}
	I1014 21:13:42.932539  725543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:13:42.951281  725543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1014 21:13:42.968549  725543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 21:13:42.987652  725543 provision.go:87] duration metric: took 594.819636ms to configureAuth
	I1014 21:13:42.987678  725543 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:13:42.987864  725543 config.go:182] Loaded profile config "old-k8s-version-624341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1014 21:13:42.987977  725543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-624341
	I1014 21:13:43.007749  725543 main.go:141] libmachine: Using SSH client type: native
	I1014 21:13:43.008075  725543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33782 <nil> <nil>}
	I1014 21:13:43.008092  725543 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:13:43.278151  725543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:13:43.278172  725543 machine.go:96] duration metric: took 1.404747001s to provisionDockerMachine
	I1014 21:13:43.278182  725543 client.go:171] duration metric: took 9.924329055s to LocalClient.Create
	I1014 21:13:43.278200  725543 start.go:167] duration metric: took 9.92440452s to libmachine.API.Create "old-k8s-version-624341"
	I1014 21:13:43.278207  725543 start.go:293] postStartSetup for "old-k8s-version-624341" (driver="docker")
	I1014 21:13:43.278217  725543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:13:43.278287  725543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:13:43.278333  725543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-624341
	I1014 21:13:43.296141  725543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33782 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/old-k8s-version-624341/id_rsa Username:docker}
	I1014 21:13:43.401406  725543 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:13:43.405086  725543 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:13:43.405115  725543 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:13:43.405127  725543 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:13:43.405184  725543 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:13:43.405267  725543 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:13:43.405368  725543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:13:43.412846  725543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:13:43.431022  725543 start.go:296] duration metric: took 152.800211ms for postStartSetup
	I1014 21:13:43.431417  725543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-624341
	I1014 21:13:43.450303  725543 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/config.json ...
	I1014 21:13:43.450604  725543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:13:43.450706  725543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-624341
	I1014 21:13:43.468716  725543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33782 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/old-k8s-version-624341/id_rsa Username:docker}
	I1014 21:13:43.569729  725543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:13:43.574529  725543 start.go:128] duration metric: took 10.224263594s to createHost
	I1014 21:13:43.574556  725543 start.go:83] releasing machines lock for "old-k8s-version-624341", held for 10.224397257s
	I1014 21:13:43.574651  725543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-624341
	I1014 21:13:43.591879  725543 ssh_runner.go:195] Run: cat /version.json
	I1014 21:13:43.591941  725543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-624341
	I1014 21:13:43.592207  725543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:13:43.592277  725543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-624341
	I1014 21:13:43.613800  725543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33782 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/old-k8s-version-624341/id_rsa Username:docker}
	I1014 21:13:43.620437  725543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33782 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/old-k8s-version-624341/id_rsa Username:docker}
	I1014 21:13:43.803909  725543 ssh_runner.go:195] Run: systemctl --version
	I1014 21:13:43.810374  725543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:13:43.847014  725543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:13:43.851399  725543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:13:43.851469  725543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:13:43.879714  725543 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1014 21:13:43.879777  725543 start.go:495] detecting cgroup driver to use...
	I1014 21:13:43.879827  725543 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:13:43.879893  725543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:13:43.897822  725543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:13:43.911947  725543 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:13:43.912013  725543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:13:43.929571  725543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:13:43.948560  725543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:13:44.064024  725543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:13:44.189734  725543 docker.go:234] disabling docker service ...
	I1014 21:13:44.189840  725543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:13:44.210697  725543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:13:44.228066  725543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:13:44.347554  725543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:13:44.474290  725543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:13:44.488457  725543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:13:44.502180  725543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1014 21:13:44.502272  725543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:13:44.519745  725543 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:13:44.519836  725543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:13:44.532983  725543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:13:44.543011  725543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:13:44.552660  725543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:13:44.561415  725543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:13:44.570649  725543 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:13:44.587680  725543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
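
	(editor's note) The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in several small steps. A quick grep summarizes their net effect; the expected values below are inferred from the commands in this log, not read from the file itself:

	# spot-check the drop-in that the sed edits above rewrote
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#     "net.ipv4.ip_unprivileged_port_start=0",
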
	I1014 21:13:44.596983  725543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:13:44.604641  725543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:13:44.612223  725543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:13:44.717519  725543 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 21:13:44.845551  725543 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:13:44.845637  725543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:13:44.849307  725543 start.go:563] Will wait 60s for crictl version
	I1014 21:13:44.849392  725543 ssh_runner.go:195] Run: which crictl
	I1014 21:13:44.852741  725543 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:13:44.884531  725543 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:13:44.884660  725543 ssh_runner.go:195] Run: crio --version
	I1014 21:13:44.913530  725543 ssh_runner.go:195] Run: crio --version
	I1014 21:13:44.948669  725543 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1014 21:13:44.951250  725543 cli_runner.go:164] Run: docker network inspect old-k8s-version-624341 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:13:44.971199  725543 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1014 21:13:44.976386  725543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:13:44.986403  725543 kubeadm.go:883] updating cluster {Name:old-k8s-version-624341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-624341 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:13:44.986513  725543 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1014 21:13:44.986572  725543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:13:45.048721  725543 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:13:45.048743  725543 crio.go:433] Images already preloaded, skipping extraction
	I1014 21:13:45.048816  725543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:13:45.091236  725543 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:13:45.091269  725543 cache_images.go:85] Images are preloaded, skipping loading
	I1014 21:13:45.091278  725543 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1014 21:13:45.091377  725543 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-624341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-624341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 21:13:45.091478  725543 ssh_runner.go:195] Run: crio config
	I1014 21:13:45.161520  725543 cni.go:84] Creating CNI manager for ""
	I1014 21:13:45.161553  725543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:13:45.161584  725543 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 21:13:45.161622  725543 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-624341 NodeName:old-k8s-version-624341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:13:45.162291  725543 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-624341"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 21:13:45.162410  725543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1014 21:13:45.173212  725543 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:13:45.173309  725543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:13:45.183846  725543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1014 21:13:45.201407  725543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:13:45.219330  725543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1014 21:13:45.237602  725543 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:13:45.242162  725543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:13:45.261973  725543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:13:45.421856  725543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:13:45.440973  725543 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341 for IP: 192.168.85.2
	I1014 21:13:45.440997  725543 certs.go:195] generating shared ca certs ...
	I1014 21:13:45.441017  725543 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:13:45.441187  725543 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:13:45.441235  725543 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:13:45.441248  725543 certs.go:257] generating profile certs ...
	I1014 21:13:45.441318  725543 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.key
	I1014 21:13:45.441343  725543 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt with IP's: []
	I1014 21:13:45.853194  725543 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt ...
	I1014 21:13:45.853226  725543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt: {Name:mk61287959adf60d02e3391cf705ca953a2fc8df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:13:45.853431  725543 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.key ...
	I1014 21:13:45.853448  725543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.key: {Name:mke9a0aae47b7edba6f89d724f52ee13626f5931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:13:45.853540  725543 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/apiserver.key.e52a7f6f
	I1014 21:13:45.853562  725543 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/apiserver.crt.e52a7f6f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1014 21:13:46.100182  725543 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/apiserver.crt.e52a7f6f ...
	I1014 21:13:46.100216  725543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/apiserver.crt.e52a7f6f: {Name:mk05c0894fbcfd71b194cd49ef9495ee5852e1ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:13:46.100417  725543 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/apiserver.key.e52a7f6f ...
	I1014 21:13:46.100435  725543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/apiserver.key.e52a7f6f: {Name:mk913d48f6f0b7f1fbf0a6a922c528f47fac85d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:13:46.100519  725543 certs.go:382] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/apiserver.crt.e52a7f6f -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/apiserver.crt
	I1014 21:13:46.100596  725543 certs.go:386] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/apiserver.key.e52a7f6f -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/apiserver.key
	I1014 21:13:46.100658  725543 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/proxy-client.key
	I1014 21:13:46.100676  725543 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/proxy-client.crt with IP's: []
	I1014 21:13:46.558222  725543 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/proxy-client.crt ...
	I1014 21:13:46.558260  725543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/proxy-client.crt: {Name:mke54e268baafaceb89620a315ddef2871aeb58a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:13:46.558463  725543 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/proxy-client.key ...
	I1014 21:13:46.558481  725543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/proxy-client.key: {Name:mk5e1520fd7f54f8e2724838cc9ad20bb616bcf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:13:46.558687  725543 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:13:46.558729  725543 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:13:46.558743  725543 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:13:46.558769  725543 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:13:46.558796  725543 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:13:46.558824  725543 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:13:46.558872  725543 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:13:46.559431  725543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:13:46.578885  725543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:13:46.600759  725543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:13:46.619169  725543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:13:46.637444  725543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1014 21:13:46.655436  725543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 21:13:46.674436  725543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:13:46.692875  725543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 21:13:46.712747  725543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:13:46.733150  725543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:13:46.751701  725543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:13:46.773290  725543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:13:46.787938  725543 ssh_runner.go:195] Run: openssl version
	I1014 21:13:46.795341  725543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:13:46.804621  725543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:13:46.808783  725543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:13:46.808850  725543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:13:46.850544  725543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 21:13:46.859938  725543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:13:46.868862  725543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:13:46.873141  725543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:13:46.873233  725543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:13:46.914640  725543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 21:13:46.923034  725543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:13:46.932351  725543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:13:46.936169  725543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:13:46.936247  725543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:13:46.983437  725543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
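
	(editor's note) The hex-named symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash lookup convention for /etc/ssl/certs. A minimal sketch of that pattern, using the minikubeCA file from this run, is:

	# compute the subject hash OpenSSL uses to locate the CA, then link it into /etc/ssl/certs
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in the log above
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
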
	I1014 21:13:46.992362  725543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:13:46.995935  725543 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 21:13:46.995991  725543 kubeadm.go:400] StartCluster: {Name:old-k8s-version-624341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-624341 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:13:46.996096  725543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:13:46.996163  725543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:13:47.025390  725543 cri.go:89] found id: ""
	I1014 21:13:47.025473  725543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:13:47.034436  725543 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 21:13:47.043639  725543 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 21:13:47.043720  725543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 21:13:47.054412  725543 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 21:13:47.054480  725543 kubeadm.go:157] found existing configuration files:
	
	I1014 21:13:47.054562  725543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 21:13:47.063315  725543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 21:13:47.063459  725543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 21:13:47.072460  725543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 21:13:47.081844  725543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 21:13:47.081965  725543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 21:13:47.090912  725543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 21:13:47.099926  725543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 21:13:47.100043  725543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 21:13:47.108242  725543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 21:13:47.117318  725543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 21:13:47.117435  725543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 21:13:47.128435  725543 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 21:13:47.178259  725543 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1014 21:13:47.178416  725543 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 21:13:47.222741  725543 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 21:13:47.222823  725543 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1014 21:13:47.222871  725543 kubeadm.go:318] OS: Linux
	I1014 21:13:47.222923  725543 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 21:13:47.222978  725543 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1014 21:13:47.223030  725543 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 21:13:47.223092  725543 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 21:13:47.223148  725543 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 21:13:47.223211  725543 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 21:13:47.223263  725543 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 21:13:47.223357  725543 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 21:13:47.223420  725543 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1014 21:13:47.313744  725543 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 21:13:47.313997  725543 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 21:13:47.314153  725543 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 21:13:47.485830  725543 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 21:13:47.491016  725543 out.go:252]   - Generating certificates and keys ...
	I1014 21:13:47.491126  725543 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 21:13:47.491210  725543 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 21:13:47.681199  725543 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 21:13:47.888773  725543 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 21:13:48.391883  725543 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 21:13:48.674119  725543 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 21:13:49.254130  725543 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 21:13:49.254967  725543 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-624341] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1014 21:13:49.558885  725543 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 21:13:49.559322  725543 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-624341] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1014 21:13:49.837237  725543 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 21:13:50.175193  725543 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 21:13:51.130538  725543 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 21:13:51.130871  725543 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 21:13:51.491163  725543 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 21:13:51.707144  725543 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 21:13:52.387536  725543 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 21:13:52.673324  725543 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 21:13:52.674116  725543 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 21:13:52.678440  725543 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 21:13:52.683823  725543 out.go:252]   - Booting up control plane ...
	I1014 21:13:52.683928  725543 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 21:13:52.684010  725543 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 21:13:52.684079  725543 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 21:13:52.699431  725543 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 21:13:52.699536  725543 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 21:13:52.699578  725543 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 21:13:52.832944  725543 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 21:14:00.835503  725543 kubeadm.go:318] [apiclient] All control plane components are healthy after 8.003297 seconds
	I1014 21:14:00.835667  725543 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 21:14:00.854119  725543 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 21:14:01.392241  725543 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 21:14:01.392502  725543 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-624341 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 21:14:01.905783  725543 kubeadm.go:318] [bootstrap-token] Using token: p00yoy.fkwen6p8ehytkv3b
	I1014 21:14:01.908646  725543 out.go:252]   - Configuring RBAC rules ...
	I1014 21:14:01.908805  725543 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 21:14:01.916558  725543 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 21:14:01.927182  725543 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 21:14:01.931931  725543 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 21:14:01.938903  725543 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 21:14:01.950260  725543 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 21:14:01.966660  725543 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 21:14:02.270095  725543 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1014 21:14:02.325526  725543 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1014 21:14:02.327028  725543 kubeadm.go:318] 
	I1014 21:14:02.327107  725543 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1014 21:14:02.327114  725543 kubeadm.go:318] 
	I1014 21:14:02.327200  725543 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1014 21:14:02.327205  725543 kubeadm.go:318] 
	I1014 21:14:02.327232  725543 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1014 21:14:02.327584  725543 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 21:14:02.327661  725543 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 21:14:02.327672  725543 kubeadm.go:318] 
	I1014 21:14:02.327730  725543 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1014 21:14:02.327739  725543 kubeadm.go:318] 
	I1014 21:14:02.327805  725543 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 21:14:02.327841  725543 kubeadm.go:318] 
	I1014 21:14:02.327904  725543 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1014 21:14:02.328012  725543 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 21:14:02.328101  725543 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 21:14:02.328120  725543 kubeadm.go:318] 
	I1014 21:14:02.328211  725543 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 21:14:02.328328  725543 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1014 21:14:02.328340  725543 kubeadm.go:318] 
	I1014 21:14:02.330154  725543 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token p00yoy.fkwen6p8ehytkv3b \
	I1014 21:14:02.330278  725543 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a \
	I1014 21:14:02.330423  725543 kubeadm.go:318] 	--control-plane 
	I1014 21:14:02.330436  725543 kubeadm.go:318] 
	I1014 21:14:02.330542  725543 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1014 21:14:02.330548  725543 kubeadm.go:318] 
	I1014 21:14:02.330650  725543 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token p00yoy.fkwen6p8ehytkv3b \
	I1014 21:14:02.330759  725543 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a 
	I1014 21:14:02.336465  725543 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1014 21:14:02.336589  725543 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
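Both [WARNING] lines above are non-fatal. A quick way to confirm what they refer to, assuming shell access to the node (for example via minikube ssh -p old-k8s-version-624341):

	sudo modprobe configs || true   # SystemVerification: the "configs" kernel module is simply absent on this 5.15 AWS kernel
	systemctl is-enabled kubelet    # Service-Kubelet: reports "disabled" until systemctl enable kubelet.service is run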
	I1014 21:14:02.336607  725543 cni.go:84] Creating CNI manager for ""
	I1014 21:14:02.336614  725543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:14:02.339784  725543 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1014 21:14:02.342762  725543 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 21:14:02.347804  725543 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1014 21:14:02.347824  725543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 21:14:02.370328  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
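With kindnet selected for the docker driver + crio runtime combination, the CNI manifest is copied over and applied with the version-matched kubectl. A few hedged checks for confirming the CNI landed (paths follow the stat call above plus the standard CNI config location; run them on the node):

	ls /opt/cni/bin                 # portmap and the other CNI plugins checked above
	ls /etc/cni/net.d               # kindnet's generated CNI config (assumed standard location)
	sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get pods -n kube-system -o wide   # the kindnet-* pod should reach Running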
	I1014 21:14:03.351136  725543 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 21:14:03.351280  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:03.351354  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-624341 minikube.k8s.io/updated_at=2025_10_14T21_14_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662 minikube.k8s.io/name=old-k8s-version-624341 minikube.k8s.io/primary=true
	I1014 21:14:03.497491  725543 ops.go:34] apiserver oom_adj: -16
	I1014 21:14:03.497612  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:03.998369  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:04.498460  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:04.997958  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:05.498621  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:05.997736  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:06.498460  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:06.997709  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:07.498294  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:07.997751  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:08.498201  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:08.997791  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:09.497965  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:09.997796  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:10.498623  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:10.998252  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:11.497670  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:11.997749  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:12.497960  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:12.998364  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:13.498698  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:13.997765  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:14.498606  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:14.998449  725543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:14:15.161493  725543 kubeadm.go:1113] duration metric: took 11.810240625s to wait for elevateKubeSystemPrivileges
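The repeated "kubectl get sa default" calls above are a poll: after creating the minikube-rbac cluster-admin binding, minikube waits (roughly every 0.5 s) for the default ServiceAccount to exist before continuing. A reduced sketch of that wait, using the same paths as the log:

	KUBECTL=/var/lib/minikube/binaries/v1.28.0/kubectl
	KUBECONFIG_FILE=/var/lib/minikube/kubeconfig
	# poll until the controller-manager has created the "default" ServiceAccount
	until sudo "$KUBECTL" get sa default --kubeconfig="$KUBECONFIG_FILE" >/dev/null 2>&1; do
	  sleep 0.5
	done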
	I1014 21:14:15.161521  725543 kubeadm.go:402] duration metric: took 28.165533778s to StartCluster
	I1014 21:14:15.161540  725543 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:14:15.161602  725543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:14:15.162612  725543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:14:15.162837  725543 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:14:15.162997  725543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 21:14:15.163263  725543 config.go:182] Loaded profile config "old-k8s-version-624341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1014 21:14:15.163374  725543 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 21:14:15.163438  725543 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-624341"
	I1014 21:14:15.163463  725543 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-624341"
	I1014 21:14:15.163486  725543 host.go:66] Checking if "old-k8s-version-624341" exists ...
	I1014 21:14:15.164010  725543 cli_runner.go:164] Run: docker container inspect old-k8s-version-624341 --format={{.State.Status}}
	I1014 21:14:15.164489  725543 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-624341"
	I1014 21:14:15.164524  725543 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-624341"
	I1014 21:14:15.164795  725543 cli_runner.go:164] Run: docker container inspect old-k8s-version-624341 --format={{.State.Status}}
	I1014 21:14:15.167176  725543 out.go:179] * Verifying Kubernetes components...
	I1014 21:14:15.171284  725543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:14:15.199490  725543 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-624341"
	I1014 21:14:15.199543  725543 host.go:66] Checking if "old-k8s-version-624341" exists ...
	I1014 21:14:15.200068  725543 cli_runner.go:164] Run: docker container inspect old-k8s-version-624341 --format={{.State.Status}}
	I1014 21:14:15.220805  725543 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:14:15.225880  725543 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:14:15.225899  725543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 21:14:15.225967  725543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-624341
	I1014 21:14:15.238801  725543 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 21:14:15.238825  725543 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 21:14:15.238893  725543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-624341
	I1014 21:14:15.302859  725543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33782 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/old-k8s-version-624341/id_rsa Username:docker}
	I1014 21:14:15.306852  725543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33782 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/old-k8s-version-624341/id_rsa Username:docker}
	I1014 21:14:15.470836  725543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 21:14:15.470988  725543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:14:15.513667  725543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:14:15.656862  725543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
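Both addon manifests are copied to /etc/kubernetes/addons/ and applied with the cluster's own kubectl. A quick way to confirm what they created (object names taken from later in this report; a sketch, not part of the test):

	K=/var/lib/minikube/binaries/v1.28.0/kubectl
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$K" -n kube-system get pod storage-provisioner
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$K" get storageclass   # the default class ("standard" in minikube) comes from storageclass.yaml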
	I1014 21:14:16.319205  725543 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-624341" to be "Ready" ...
	I1014 21:14:16.319519  725543 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1014 21:14:16.624409  725543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.110688401s)
	I1014 21:14:16.655885  725543 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1014 21:14:16.658768  725543 addons.go:514] duration metric: took 1.49537625s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 21:14:16.824650  725543 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-624341" context rescaled to 1 replicas
	W1014 21:14:18.323357  725543 node_ready.go:57] node "old-k8s-version-624341" has "Ready":"False" status (will retry)
	W1014 21:14:20.822222  725543 node_ready.go:57] node "old-k8s-version-624341" has "Ready":"False" status (will retry)
	W1014 21:14:23.322679  725543 node_ready.go:57] node "old-k8s-version-624341" has "Ready":"False" status (will retry)
	W1014 21:14:25.323050  725543 node_ready.go:57] node "old-k8s-version-624341" has "Ready":"False" status (will retry)
	W1014 21:14:27.822469  725543 node_ready.go:57] node "old-k8s-version-624341" has "Ready":"False" status (will retry)
	W1014 21:14:29.823090  725543 node_ready.go:57] node "old-k8s-version-624341" has "Ready":"False" status (will retry)
	I1014 21:14:30.331378  725543 node_ready.go:49] node "old-k8s-version-624341" is "Ready"
	I1014 21:14:30.331403  725543 node_ready.go:38] duration metric: took 14.012173759s for node "old-k8s-version-624341" to be "Ready" ...
	I1014 21:14:30.331416  725543 api_server.go:52] waiting for apiserver process to appear ...
	I1014 21:14:30.331471  725543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 21:14:30.359306  725543 api_server.go:72] duration metric: took 15.196441419s to wait for apiserver process to appear ...
	I1014 21:14:30.359341  725543 api_server.go:88] waiting for apiserver healthz status ...
	I1014 21:14:30.359360  725543 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:14:30.370001  725543 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1014 21:14:30.371453  725543 api_server.go:141] control plane version: v1.28.0
	I1014 21:14:30.371489  725543 api_server.go:131] duration metric: took 12.141381ms to wait for apiserver health ...
	I1014 21:14:30.371498  725543 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 21:14:30.377518  725543 system_pods.go:59] 8 kube-system pods found
	I1014 21:14:30.377549  725543 system_pods.go:61] "coredns-5dd5756b68-cqt59" [dbce2894-73ff-4d99-8254-2f66d6848f2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 21:14:30.377556  725543 system_pods.go:61] "etcd-old-k8s-version-624341" [45e7b903-c122-4d76-8005-ea964fa235f2] Running
	I1014 21:14:30.377562  725543 system_pods.go:61] "kindnet-5ffrd" [18cf142c-6ae3-4a6e-83fa-4fc72518a4fd] Running
	I1014 21:14:30.377567  725543 system_pods.go:61] "kube-apiserver-old-k8s-version-624341" [33b0e673-a016-4c1b-9606-d0a24da55418] Running
	I1014 21:14:30.377572  725543 system_pods.go:61] "kube-controller-manager-old-k8s-version-624341" [3bef9b9d-3562-4d8e-a838-9dcca6ebf4b9] Running
	I1014 21:14:30.377576  725543 system_pods.go:61] "kube-proxy-v2k8v" [cfdb0de4-da34-4c94-9144-7d5457893dcb] Running
	I1014 21:14:30.377580  725543 system_pods.go:61] "kube-scheduler-old-k8s-version-624341" [cb11a602-e587-4384-816b-f5fed1c14a7e] Running
	I1014 21:14:30.377587  725543 system_pods.go:61] "storage-provisioner" [bd79c7b1-86f7-4049-bffa-08e050f93c5a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 21:14:30.377592  725543 system_pods.go:74] duration metric: took 6.088669ms to wait for pod list to return data ...
	I1014 21:14:30.377601  725543 default_sa.go:34] waiting for default service account to be created ...
	I1014 21:14:30.381329  725543 default_sa.go:45] found service account: "default"
	I1014 21:14:30.381429  725543 default_sa.go:55] duration metric: took 3.822314ms for default service account to be created ...
	I1014 21:14:30.381453  725543 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 21:14:30.394407  725543 system_pods.go:86] 8 kube-system pods found
	I1014 21:14:30.394436  725543 system_pods.go:89] "coredns-5dd5756b68-cqt59" [dbce2894-73ff-4d99-8254-2f66d6848f2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 21:14:30.394442  725543 system_pods.go:89] "etcd-old-k8s-version-624341" [45e7b903-c122-4d76-8005-ea964fa235f2] Running
	I1014 21:14:30.394448  725543 system_pods.go:89] "kindnet-5ffrd" [18cf142c-6ae3-4a6e-83fa-4fc72518a4fd] Running
	I1014 21:14:30.394452  725543 system_pods.go:89] "kube-apiserver-old-k8s-version-624341" [33b0e673-a016-4c1b-9606-d0a24da55418] Running
	I1014 21:14:30.394457  725543 system_pods.go:89] "kube-controller-manager-old-k8s-version-624341" [3bef9b9d-3562-4d8e-a838-9dcca6ebf4b9] Running
	I1014 21:14:30.394461  725543 system_pods.go:89] "kube-proxy-v2k8v" [cfdb0de4-da34-4c94-9144-7d5457893dcb] Running
	I1014 21:14:30.394466  725543 system_pods.go:89] "kube-scheduler-old-k8s-version-624341" [cb11a602-e587-4384-816b-f5fed1c14a7e] Running
	I1014 21:14:30.394472  725543 system_pods.go:89] "storage-provisioner" [bd79c7b1-86f7-4049-bffa-08e050f93c5a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 21:14:30.394492  725543 retry.go:31] will retry after 280.01696ms: missing components: kube-dns
	I1014 21:14:30.679180  725543 system_pods.go:86] 8 kube-system pods found
	I1014 21:14:30.679228  725543 system_pods.go:89] "coredns-5dd5756b68-cqt59" [dbce2894-73ff-4d99-8254-2f66d6848f2d] Running
	I1014 21:14:30.679236  725543 system_pods.go:89] "etcd-old-k8s-version-624341" [45e7b903-c122-4d76-8005-ea964fa235f2] Running
	I1014 21:14:30.679240  725543 system_pods.go:89] "kindnet-5ffrd" [18cf142c-6ae3-4a6e-83fa-4fc72518a4fd] Running
	I1014 21:14:30.679244  725543 system_pods.go:89] "kube-apiserver-old-k8s-version-624341" [33b0e673-a016-4c1b-9606-d0a24da55418] Running
	I1014 21:14:30.679249  725543 system_pods.go:89] "kube-controller-manager-old-k8s-version-624341" [3bef9b9d-3562-4d8e-a838-9dcca6ebf4b9] Running
	I1014 21:14:30.679253  725543 system_pods.go:89] "kube-proxy-v2k8v" [cfdb0de4-da34-4c94-9144-7d5457893dcb] Running
	I1014 21:14:30.679258  725543 system_pods.go:89] "kube-scheduler-old-k8s-version-624341" [cb11a602-e587-4384-816b-f5fed1c14a7e] Running
	I1014 21:14:30.679262  725543 system_pods.go:89] "storage-provisioner" [bd79c7b1-86f7-4049-bffa-08e050f93c5a] Running
	I1014 21:14:30.679270  725543 system_pods.go:126] duration metric: took 297.798578ms to wait for k8s-apps to be running ...
	I1014 21:14:30.679283  725543 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 21:14:30.679351  725543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:14:30.694445  725543 system_svc.go:56] duration metric: took 15.151359ms WaitForService to wait for kubelet
	I1014 21:14:30.694471  725543 kubeadm.go:586] duration metric: took 15.531612625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:14:30.694491  725543 node_conditions.go:102] verifying NodePressure condition ...
	I1014 21:14:30.701224  725543 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 21:14:30.701261  725543 node_conditions.go:123] node cpu capacity is 2
	I1014 21:14:30.701275  725543 node_conditions.go:105] duration metric: took 6.778483ms to run NodePressure ...
	I1014 21:14:30.701287  725543 start.go:241] waiting for startup goroutines ...
	I1014 21:14:30.701295  725543 start.go:246] waiting for cluster config update ...
	I1014 21:14:30.701306  725543 start.go:255] writing updated cluster config ...
	I1014 21:14:30.701579  725543 ssh_runner.go:195] Run: rm -f paused
	I1014 21:14:30.705494  725543 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 21:14:30.710410  725543 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-cqt59" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:14:30.715789  725543 pod_ready.go:94] pod "coredns-5dd5756b68-cqt59" is "Ready"
	I1014 21:14:30.715821  725543 pod_ready.go:86] duration metric: took 5.384445ms for pod "coredns-5dd5756b68-cqt59" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:14:30.719304  725543 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-624341" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:14:30.724419  725543 pod_ready.go:94] pod "etcd-old-k8s-version-624341" is "Ready"
	I1014 21:14:30.724450  725543 pod_ready.go:86] duration metric: took 5.121237ms for pod "etcd-old-k8s-version-624341" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:14:30.727796  725543 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-624341" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:14:30.733164  725543 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-624341" is "Ready"
	I1014 21:14:30.733193  725543 pod_ready.go:86] duration metric: took 5.37473ms for pod "kube-apiserver-old-k8s-version-624341" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:14:30.736575  725543 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-624341" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:14:31.110468  725543 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-624341" is "Ready"
	I1014 21:14:31.110498  725543 pod_ready.go:86] duration metric: took 373.898241ms for pod "kube-controller-manager-old-k8s-version-624341" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:14:31.310326  725543 pod_ready.go:83] waiting for pod "kube-proxy-v2k8v" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:14:31.710294  725543 pod_ready.go:94] pod "kube-proxy-v2k8v" is "Ready"
	I1014 21:14:31.710369  725543 pod_ready.go:86] duration metric: took 400.014842ms for pod "kube-proxy-v2k8v" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:14:31.910028  725543 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-624341" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:14:32.310112  725543 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-624341" is "Ready"
	I1014 21:14:32.310142  725543 pod_ready.go:86] duration metric: took 400.087491ms for pod "kube-scheduler-old-k8s-version-624341" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:14:32.310154  725543 pod_ready.go:40] duration metric: took 1.604628089s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
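The extra wait above checks one pod per control-plane label (k8s-app=kube-dns, component=etcd, and so on). Roughly the same check expressed with kubectl wait, using the label list from the log (a sketch, not what minikube actually runs):

	K=/var/lib/minikube/binaries/v1.28.0/kubectl
	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$K" -n kube-system \
	    wait --for=condition=Ready pod -l "$sel" --timeout=240s
	done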
	I1014 21:14:32.373919  725543 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1014 21:14:32.377470  725543 out.go:203] 
	W1014 21:14:32.380468  725543 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1014 21:14:32.383400  725543 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1014 21:14:32.386311  725543 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-624341" cluster and "default" namespace by default
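Startup ends with a kubectl skew warning: the host client is 1.33.2 while the cluster runs 1.28.0, five minor versions apart and well outside the supported one-minor-version skew. Using the bundled, version-matched kubectl as the log suggests avoids that, for example:

	out/minikube-linux-arm64 -p old-k8s-version-624341 kubectl -- version
	out/minikube-linux-arm64 -p old-k8s-version-624341 kubectl -- get pods -A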
	
	
	==> CRI-O <==
	Oct 14 21:14:30 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:30.334539984Z" level=info msg="Created container 13c84d1f610f018b89f5b4e813c7344c0620913b54a39bbc79eac16c6a09a048: kube-system/coredns-5dd5756b68-cqt59/coredns" id=34d9a355-be78-4b6c-b10d-4d6295994655 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:14:30 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:30.335489043Z" level=info msg="Starting container: 13c84d1f610f018b89f5b4e813c7344c0620913b54a39bbc79eac16c6a09a048" id=c7ba2768-6d47-4aa6-9508-6920cc182223 name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:14:30 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:30.339334964Z" level=info msg="Started container" PID=1930 containerID=13c84d1f610f018b89f5b4e813c7344c0620913b54a39bbc79eac16c6a09a048 description=kube-system/coredns-5dd5756b68-cqt59/coredns id=c7ba2768-6d47-4aa6-9508-6920cc182223 name=/runtime.v1.RuntimeService/StartContainer sandboxID=062fde2e369a3f4d9f9c997028502966d425cf2d20ed7bb9a3b9a039fa2181cb
	Oct 14 21:14:32 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:32.91565112Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d1be4a51-8a57-4034-b313-d1888fb6cef6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:14:32 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:32.915780828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:14:32 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:32.924588435Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e40ec2fb41aa2f0eda214d6f48efad9b81a6567b3e3d355ac79068e76ee7c0b4 UID:ef3fe4a0-c583-4dca-a1ca-5d7dfd71d192 NetNS:/var/run/netns/afaf3f8f-6993-46eb-928e-e1abbb0dfd5f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079580}] Aliases:map[]}"
	Oct 14 21:14:32 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:32.924628091Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 14 21:14:32 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:32.94237024Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e40ec2fb41aa2f0eda214d6f48efad9b81a6567b3e3d355ac79068e76ee7c0b4 UID:ef3fe4a0-c583-4dca-a1ca-5d7dfd71d192 NetNS:/var/run/netns/afaf3f8f-6993-46eb-928e-e1abbb0dfd5f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079580}] Aliases:map[]}"
	Oct 14 21:14:32 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:32.942523448Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 14 21:14:32 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:32.947451313Z" level=info msg="Ran pod sandbox e40ec2fb41aa2f0eda214d6f48efad9b81a6567b3e3d355ac79068e76ee7c0b4 with infra container: default/busybox/POD" id=d1be4a51-8a57-4034-b313-d1888fb6cef6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:14:32 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:32.948666714Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e04cb92a-a92c-4223-8557-d985754c2099 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:14:32 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:32.948819356Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e04cb92a-a92c-4223-8557-d985754c2099 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:14:32 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:32.9488657Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e04cb92a-a92c-4223-8557-d985754c2099 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:14:32 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:32.949463993Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=febcd734-7c69-4e71-b744-9c418d5f5971 name=/runtime.v1.ImageService/PullImage
	Oct 14 21:14:32 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:32.951938736Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 14 21:14:34 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:34.94947212Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=febcd734-7c69-4e71-b744-9c418d5f5971 name=/runtime.v1.ImageService/PullImage
	Oct 14 21:14:34 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:34.955008895Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=735aee9a-1157-47af-8c44-75ae57d6f951 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:14:34 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:34.957541501Z" level=info msg="Creating container: default/busybox/busybox" id=f5ef04b3-c482-47b3-84b2-07403a260a64 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:14:34 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:34.958336204Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:14:34 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:34.964712064Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:14:34 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:34.96533654Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:14:34 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:34.981439511Z" level=info msg="Created container ca1e9f789b082eaf9b6d5532457a1ee41b49490c666b1f5254cfd40cec153974: default/busybox/busybox" id=f5ef04b3-c482-47b3-84b2-07403a260a64 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:14:34 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:34.982330904Z" level=info msg="Starting container: ca1e9f789b082eaf9b6d5532457a1ee41b49490c666b1f5254cfd40cec153974" id=5c65c96d-d688-4705-ae8f-af41def507ab name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:14:34 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:34.984842538Z" level=info msg="Started container" PID=1985 containerID=ca1e9f789b082eaf9b6d5532457a1ee41b49490c666b1f5254cfd40cec153974 description=default/busybox/busybox id=5c65c96d-d688-4705-ae8f-af41def507ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=e40ec2fb41aa2f0eda214d6f48efad9b81a6567b3e3d355ac79068e76ee7c0b4
	Oct 14 21:14:41 old-k8s-version-624341 crio[839]: time="2025-10-14T21:14:41.731315056Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	ca1e9f789b082       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   e40ec2fb41aa2       busybox                                          default
	13c84d1f610f0       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      12 seconds ago      Running             coredns                   0                   062fde2e369a3       coredns-5dd5756b68-cqt59                         kube-system
	a66c80693f622       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   dea2c9a9a0ea7       storage-provisioner                              kube-system
	bdc7e2f781971       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   1920c0fcfe9cd       kindnet-5ffrd                                    kube-system
	c02a42027bdb9       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      26 seconds ago      Running             kube-proxy                0                   77c593c177df6       kube-proxy-v2k8v                                 kube-system
	866f7f6d042b3       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      49 seconds ago      Running             kube-scheduler            0                   f223bd018e455       kube-scheduler-old-k8s-version-624341            kube-system
	7b02502ef7a0a       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      49 seconds ago      Running             kube-apiserver            0                   216c5d8d7ac47       kube-apiserver-old-k8s-version-624341            kube-system
	4961032872a92       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      49 seconds ago      Running             etcd                      0                   8d701273f7c6e       etcd-old-k8s-version-624341                      kube-system
	5fb5c9a2cd132       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      49 seconds ago      Running             kube-controller-manager   0                   6c717ab64cc33       kube-controller-manager-old-k8s-version-624341   kube-system
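The table above is the CRI view of the node and can be reproduced with crictl against the CRI-O socket, assuming crictl is present on the node image:

	# on the node, e.g. via: minikube ssh -p old-k8s-version-624341
	sudo crictl ps -a     # containers with state, attempt count and pod sandbox
	sudo crictl pods      # pod sandboxes
	sudo crictl images    # images, matching the digests shown above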
	
	
	==> coredns [13c84d1f610f018b89f5b4e813c7344c0620913b54a39bbc79eac16c6a09a048] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57455 - 61012 "HINFO IN 6111742442937099041.2789209920555275944. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024105078s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-624341
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-624341
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=old-k8s-version-624341
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T21_14_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 21:13:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-624341
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 21:14:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 21:14:33 +0000   Tue, 14 Oct 2025 21:13:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 21:14:33 +0000   Tue, 14 Oct 2025 21:13:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 21:14:33 +0000   Tue, 14 Oct 2025 21:13:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 21:14:33 +0000   Tue, 14 Oct 2025 21:14:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-624341
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 65971ecb2dcb4d78b2ecbbc032eb7f11
	  System UUID:                32ff5aeb-1d61-4167-8fd8-805f888e4635
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-cqt59                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-old-k8s-version-624341                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-5ffrd                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-624341             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-624341    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-v2k8v                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-624341             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s   kubelet          Node old-k8s-version-624341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s   kubelet          Node old-k8s-version-624341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s   kubelet          Node old-k8s-version-624341 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-624341 event: Registered Node old-k8s-version-624341 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-624341 status is now: NodeReady
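This dump corresponds to kubectl describe node; the allocated-resources percentages are requests and limits divided by the allocatable values above (for example 850m CPU requested out of 2000m allocatable is about 42%). To regenerate it against this profile:

	out/minikube-linux-arm64 -p old-k8s-version-624341 kubectl -- describe node old-k8s-version-624341
	out/minikube-linux-arm64 -p old-k8s-version-624341 kubectl -- get node old-k8s-version-624341 -o wide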
	
	
	==> dmesg <==
	[Oct14 20:36] overlayfs: idmapped layers are currently not supported
	[Oct14 20:38] overlayfs: idmapped layers are currently not supported
	[Oct14 20:43] overlayfs: idmapped layers are currently not supported
	[ +36.710003] overlayfs: idmapped layers are currently not supported
	[Oct14 20:45] overlayfs: idmapped layers are currently not supported
	[Oct14 20:46] overlayfs: idmapped layers are currently not supported
	[Oct14 20:47] overlayfs: idmapped layers are currently not supported
	[Oct14 20:48] overlayfs: idmapped layers are currently not supported
	[Oct14 20:49] overlayfs: idmapped layers are currently not supported
	[ +15.861847] overlayfs: idmapped layers are currently not supported
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	[Oct14 21:03] overlayfs: idmapped layers are currently not supported
	[Oct14 21:04] overlayfs: idmapped layers are currently not supported
	[Oct14 21:12] overlayfs: idmapped layers are currently not supported
	[Oct14 21:13] overlayfs: idmapped layers are currently not supported
	[ +32.288963] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4961032872a92d81d7102a5dc797e8ca51847162f04d8a87b41b34cff8bfd539] <==
	{"level":"info","ts":"2025-10-14T21:13:54.050495Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-14T21:13:54.050624Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-14T21:13:54.049592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-14T21:13:54.049674Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-14T21:13:54.050352Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-14T21:13:54.051004Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-14T21:13:54.051137Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-14T21:13:54.516347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-14T21:13:54.516466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-14T21:13:54.516508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-10-14T21:13:54.516559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-10-14T21:13:54.516598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-14T21:13:54.516646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-14T21:13:54.51668Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-14T21:13:54.520467Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-14T21:13:54.524065Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-624341 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-14T21:13:54.524146Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-14T21:13:54.525362Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-14T21:13:54.525491Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-14T21:13:54.525947Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-14T21:13:54.525563Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-14T21:13:54.525653Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-14T21:13:54.52653Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-14T21:13:54.526573Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-14T21:13:54.546992Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 21:14:43 up  3:57,  0 user,  load average: 2.24, 1.49, 1.59
	Linux old-k8s-version-624341 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bdc7e2f78197169ca2f3083f02ed964705b0b96c3e88897eaad3a3ac8f1c4f3a] <==
	I1014 21:14:19.174118       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:14:19.174380       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1014 21:14:19.174525       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:14:19.174542       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:14:19.174558       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:14:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:14:19.374642       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:14:19.374719       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:14:19.374753       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:14:19.375813       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1014 21:14:19.575218       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1014 21:14:19.575244       1 metrics.go:72] Registering metrics
	I1014 21:14:19.575309       1 controller.go:711] "Syncing nftables rules"
	I1014 21:14:29.380112       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1014 21:14:29.380168       1 main.go:301] handling current node
	I1014 21:14:39.377478       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1014 21:14:39.377512       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7b02502ef7a0a2fce0b6859f50d00df65147f8a2d9fb5251083f643cfa1c8f73] <==
	I1014 21:13:58.902436       1 aggregator.go:166] initial CRD sync complete...
	I1014 21:13:58.902493       1 autoregister_controller.go:141] Starting autoregister controller
	I1014 21:13:58.903079       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 21:13:58.903131       1 cache.go:39] Caches are synced for autoregister controller
	I1014 21:13:58.903176       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 21:13:58.908523       1 controller.go:624] quota admission added evaluator for: namespaces
	I1014 21:13:58.911531       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1014 21:13:58.913662       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1014 21:13:58.918914       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1014 21:13:59.122828       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 21:13:59.505469       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1014 21:13:59.510863       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1014 21:13:59.510885       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 21:14:00.407622       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 21:14:00.476611       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 21:14:00.538461       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1014 21:14:00.545853       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1014 21:14:00.547035       1 controller.go:624] quota admission added evaluator for: endpoints
	I1014 21:14:00.558268       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 21:14:00.867378       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1014 21:14:02.243966       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1014 21:14:02.268725       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 21:14:02.282041       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1014 21:14:15.588759       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1014 21:14:15.984551       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [5fb5c9a2cd132b5dc210df998df070391f4067a91c8db0361561652d1086c628] <==
	I1014 21:14:15.224394       1 shared_informer.go:318] Caches are synced for disruption
	I1014 21:14:15.226155       1 shared_informer.go:318] Caches are synced for cronjob
	I1014 21:14:15.286228       1 shared_informer.go:318] Caches are synced for resource quota
	I1014 21:14:15.286405       1 shared_informer.go:318] Caches are synced for resource quota
	I1014 21:14:15.599855       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1014 21:14:15.660876       1 shared_informer.go:318] Caches are synced for garbage collector
	I1014 21:14:15.679158       1 shared_informer.go:318] Caches are synced for garbage collector
	I1014 21:14:15.679200       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1014 21:14:16.008540       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-v2k8v"
	I1014 21:14:16.019374       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5ffrd"
	I1014 21:14:16.127783       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-v8gvx"
	I1014 21:14:16.151621       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-cqt59"
	I1014 21:14:16.197157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="598.167273ms"
	I1014 21:14:16.217533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.31318ms"
	I1014 21:14:16.217719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.562µs"
	I1014 21:14:16.384407       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1014 21:14:16.445115       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-v8gvx"
	I1014 21:14:16.462980       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.780948ms"
	I1014 21:14:16.485812       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.788391ms"
	I1014 21:14:16.486312       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.632µs"
	I1014 21:14:29.943767       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.977µs"
	I1014 21:14:29.971650       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.252µs"
	I1014 21:14:30.078620       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1014 21:14:30.630417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.305589ms"
	I1014 21:14:30.630736       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.973µs"
	
	
	==> kube-proxy [c02a42027bdb9c7cd7a3b47e44eee10696e85169161c94f22bc7c2da5be7f7cc] <==
	I1014 21:14:16.578978       1 server_others.go:69] "Using iptables proxy"
	I1014 21:14:16.680360       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1014 21:14:16.703109       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:14:16.706482       1 server_others.go:152] "Using iptables Proxier"
	I1014 21:14:16.706568       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1014 21:14:16.706609       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1014 21:14:16.706670       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1014 21:14:16.707643       1 server.go:846] "Version info" version="v1.28.0"
	I1014 21:14:16.707694       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:14:16.708740       1 config.go:188] "Starting service config controller"
	I1014 21:14:16.708816       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1014 21:14:16.708861       1 config.go:97] "Starting endpoint slice config controller"
	I1014 21:14:16.708899       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1014 21:14:16.709386       1 config.go:315] "Starting node config controller"
	I1014 21:14:16.710191       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1014 21:14:16.809301       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1014 21:14:16.809344       1 shared_informer.go:318] Caches are synced for service config
	I1014 21:14:16.810674       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [866f7f6d042b316161f43481b5c30cfd790bf807c243e20a840a51066e3e41f9] <==
	W1014 21:13:58.912567       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 21:13:58.913355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1014 21:13:58.912601       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 21:13:58.913435       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1014 21:13:58.912633       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 21:13:58.913510       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1014 21:13:58.912667       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 21:13:58.913594       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1014 21:13:58.912701       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 21:13:58.913671       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1014 21:13:58.912739       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 21:13:58.913759       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1014 21:13:58.912773       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1014 21:13:58.912810       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1014 21:13:58.912846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 21:13:58.913877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 21:13:58.913922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 21:13:58.913960       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1014 21:13:59.806292       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 21:13:59.806324       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1014 21:13:59.896602       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 21:13:59.896742       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1014 21:14:00.234382       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 21:14:00.234492       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 21:14:02.093979       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 21:14:15 old-k8s-version-624341 kubelet[1360]: I1014 21:14:15.220091    1360 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 14 21:14:16 old-k8s-version-624341 kubelet[1360]: I1014 21:14:16.038122    1360 topology_manager.go:215] "Topology Admit Handler" podUID="cfdb0de4-da34-4c94-9144-7d5457893dcb" podNamespace="kube-system" podName="kube-proxy-v2k8v"
	Oct 14 21:14:16 old-k8s-version-624341 kubelet[1360]: I1014 21:14:16.052896    1360 topology_manager.go:215] "Topology Admit Handler" podUID="18cf142c-6ae3-4a6e-83fa-4fc72518a4fd" podNamespace="kube-system" podName="kindnet-5ffrd"
	Oct 14 21:14:16 old-k8s-version-624341 kubelet[1360]: I1014 21:14:16.088837    1360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cfdb0de4-da34-4c94-9144-7d5457893dcb-kube-proxy\") pod \"kube-proxy-v2k8v\" (UID: \"cfdb0de4-da34-4c94-9144-7d5457893dcb\") " pod="kube-system/kube-proxy-v2k8v"
	Oct 14 21:14:16 old-k8s-version-624341 kubelet[1360]: I1014 21:14:16.088949    1360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfdb0de4-da34-4c94-9144-7d5457893dcb-lib-modules\") pod \"kube-proxy-v2k8v\" (UID: \"cfdb0de4-da34-4c94-9144-7d5457893dcb\") " pod="kube-system/kube-proxy-v2k8v"
	Oct 14 21:14:16 old-k8s-version-624341 kubelet[1360]: I1014 21:14:16.088974    1360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/18cf142c-6ae3-4a6e-83fa-4fc72518a4fd-cni-cfg\") pod \"kindnet-5ffrd\" (UID: \"18cf142c-6ae3-4a6e-83fa-4fc72518a4fd\") " pod="kube-system/kindnet-5ffrd"
	Oct 14 21:14:16 old-k8s-version-624341 kubelet[1360]: I1014 21:14:16.089026    1360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18cf142c-6ae3-4a6e-83fa-4fc72518a4fd-xtables-lock\") pod \"kindnet-5ffrd\" (UID: \"18cf142c-6ae3-4a6e-83fa-4fc72518a4fd\") " pod="kube-system/kindnet-5ffrd"
	Oct 14 21:14:16 old-k8s-version-624341 kubelet[1360]: I1014 21:14:16.089053    1360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfdb0de4-da34-4c94-9144-7d5457893dcb-xtables-lock\") pod \"kube-proxy-v2k8v\" (UID: \"cfdb0de4-da34-4c94-9144-7d5457893dcb\") " pod="kube-system/kube-proxy-v2k8v"
	Oct 14 21:14:16 old-k8s-version-624341 kubelet[1360]: I1014 21:14:16.089109    1360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcrxb\" (UniqueName: \"kubernetes.io/projected/cfdb0de4-da34-4c94-9144-7d5457893dcb-kube-api-access-wcrxb\") pod \"kube-proxy-v2k8v\" (UID: \"cfdb0de4-da34-4c94-9144-7d5457893dcb\") " pod="kube-system/kube-proxy-v2k8v"
	Oct 14 21:14:16 old-k8s-version-624341 kubelet[1360]: I1014 21:14:16.089134    1360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18cf142c-6ae3-4a6e-83fa-4fc72518a4fd-lib-modules\") pod \"kindnet-5ffrd\" (UID: \"18cf142c-6ae3-4a6e-83fa-4fc72518a4fd\") " pod="kube-system/kindnet-5ffrd"
	Oct 14 21:14:16 old-k8s-version-624341 kubelet[1360]: I1014 21:14:16.089195    1360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jkpt\" (UniqueName: \"kubernetes.io/projected/18cf142c-6ae3-4a6e-83fa-4fc72518a4fd-kube-api-access-4jkpt\") pod \"kindnet-5ffrd\" (UID: \"18cf142c-6ae3-4a6e-83fa-4fc72518a4fd\") " pod="kube-system/kindnet-5ffrd"
	Oct 14 21:14:19 old-k8s-version-624341 kubelet[1360]: I1014 21:14:19.584210    1360 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-v2k8v" podStartSLOduration=4.584156662 podCreationTimestamp="2025-10-14 21:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:14:16.597190012 +0000 UTC m=+14.395721050" watchObservedRunningTime="2025-10-14 21:14:19.584156662 +0000 UTC m=+17.382687684"
	Oct 14 21:14:22 old-k8s-version-624341 kubelet[1360]: I1014 21:14:22.531998    1360 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-5ffrd" podStartSLOduration=4.962835501 podCreationTimestamp="2025-10-14 21:14:15 +0000 UTC" firstStartedPulling="2025-10-14 21:14:16.407820251 +0000 UTC m=+14.206351273" lastFinishedPulling="2025-10-14 21:14:18.976914898 +0000 UTC m=+16.775445920" observedRunningTime="2025-10-14 21:14:19.584597021 +0000 UTC m=+17.383128051" watchObservedRunningTime="2025-10-14 21:14:22.531930148 +0000 UTC m=+20.330461170"
	Oct 14 21:14:29 old-k8s-version-624341 kubelet[1360]: I1014 21:14:29.896932    1360 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 14 21:14:29 old-k8s-version-624341 kubelet[1360]: I1014 21:14:29.927682    1360 topology_manager.go:215] "Topology Admit Handler" podUID="bd79c7b1-86f7-4049-bffa-08e050f93c5a" podNamespace="kube-system" podName="storage-provisioner"
	Oct 14 21:14:29 old-k8s-version-624341 kubelet[1360]: I1014 21:14:29.940134    1360 topology_manager.go:215] "Topology Admit Handler" podUID="dbce2894-73ff-4d99-8254-2f66d6848f2d" podNamespace="kube-system" podName="coredns-5dd5756b68-cqt59"
	Oct 14 21:14:29 old-k8s-version-624341 kubelet[1360]: I1014 21:14:29.991683    1360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lwwn\" (UniqueName: \"kubernetes.io/projected/bd79c7b1-86f7-4049-bffa-08e050f93c5a-kube-api-access-4lwwn\") pod \"storage-provisioner\" (UID: \"bd79c7b1-86f7-4049-bffa-08e050f93c5a\") " pod="kube-system/storage-provisioner"
	Oct 14 21:14:29 old-k8s-version-624341 kubelet[1360]: I1014 21:14:29.991739    1360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbce2894-73ff-4d99-8254-2f66d6848f2d-config-volume\") pod \"coredns-5dd5756b68-cqt59\" (UID: \"dbce2894-73ff-4d99-8254-2f66d6848f2d\") " pod="kube-system/coredns-5dd5756b68-cqt59"
	Oct 14 21:14:29 old-k8s-version-624341 kubelet[1360]: I1014 21:14:29.991768    1360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bd79c7b1-86f7-4049-bffa-08e050f93c5a-tmp\") pod \"storage-provisioner\" (UID: \"bd79c7b1-86f7-4049-bffa-08e050f93c5a\") " pod="kube-system/storage-provisioner"
	Oct 14 21:14:29 old-k8s-version-624341 kubelet[1360]: I1014 21:14:29.991821    1360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb7l2\" (UniqueName: \"kubernetes.io/projected/dbce2894-73ff-4d99-8254-2f66d6848f2d-kube-api-access-qb7l2\") pod \"coredns-5dd5756b68-cqt59\" (UID: \"dbce2894-73ff-4d99-8254-2f66d6848f2d\") " pod="kube-system/coredns-5dd5756b68-cqt59"
	Oct 14 21:14:30 old-k8s-version-624341 kubelet[1360]: I1014 21:14:30.619355    1360 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.619302004 podCreationTimestamp="2025-10-14 21:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:14:30.605587087 +0000 UTC m=+28.404118117" watchObservedRunningTime="2025-10-14 21:14:30.619302004 +0000 UTC m=+28.417833026"
	Oct 14 21:14:32 old-k8s-version-624341 kubelet[1360]: I1014 21:14:32.612618    1360 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-cqt59" podStartSLOduration=16.612561637 podCreationTimestamp="2025-10-14 21:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:14:30.619124393 +0000 UTC m=+28.417655423" watchObservedRunningTime="2025-10-14 21:14:32.612561637 +0000 UTC m=+30.411092675"
	Oct 14 21:14:32 old-k8s-version-624341 kubelet[1360]: I1014 21:14:32.613426    1360 topology_manager.go:215] "Topology Admit Handler" podUID="ef3fe4a0-c583-4dca-a1ca-5d7dfd71d192" podNamespace="default" podName="busybox"
	Oct 14 21:14:32 old-k8s-version-624341 kubelet[1360]: I1014 21:14:32.710655    1360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmtdf\" (UniqueName: \"kubernetes.io/projected/ef3fe4a0-c583-4dca-a1ca-5d7dfd71d192-kube-api-access-zmtdf\") pod \"busybox\" (UID: \"ef3fe4a0-c583-4dca-a1ca-5d7dfd71d192\") " pod="default/busybox"
	Oct 14 21:14:32 old-k8s-version-624341 kubelet[1360]: W1014 21:14:32.946114    1360 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e/crio-e40ec2fb41aa2f0eda214d6f48efad9b81a6567b3e3d355ac79068e76ee7c0b4 WatchSource:0}: Error finding container e40ec2fb41aa2f0eda214d6f48efad9b81a6567b3e3d355ac79068e76ee7c0b4: Status 404 returned error can't find the container with id e40ec2fb41aa2f0eda214d6f48efad9b81a6567b3e3d355ac79068e76ee7c0b4
	
	
	==> storage-provisioner [a66c80693f62279b2fc6464f33737aa69aaed9ca192c4a8a98d85dd5eddf47d9] <==
	I1014 21:14:30.312279       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 21:14:30.352482       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 21:14:30.352542       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 21:14:30.395524       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 21:14:30.398160       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-624341_c7cfd765-7b3e-4bac-9ed1-15ff67686c49!
	I1014 21:14:30.404666       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e06cdbc4-6782-498a-91de-d8fc4443f1e8", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-624341_c7cfd765-7b3e-4bac-9ed1-15ff67686c49 became leader
	I1014 21:14:30.498636       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-624341_c7cfd765-7b3e-4bac-9ed1-15ff67686c49!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-624341 -n old-k8s-version-624341
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-624341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.56s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (7.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-624341 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-624341 --alsologtostderr -v=1: exit status 80 (2.198945037s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-624341 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 21:15:59.509265  733850 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:15:59.509439  733850 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:15:59.509450  733850 out.go:374] Setting ErrFile to fd 2...
	I1014 21:15:59.509456  733850 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:15:59.509716  733850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:15:59.509955  733850 out.go:368] Setting JSON to false
	I1014 21:15:59.509984  733850 mustload.go:65] Loading cluster: old-k8s-version-624341
	I1014 21:15:59.510385  733850 config.go:182] Loaded profile config "old-k8s-version-624341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1014 21:15:59.510848  733850 cli_runner.go:164] Run: docker container inspect old-k8s-version-624341 --format={{.State.Status}}
	I1014 21:15:59.534694  733850 host.go:66] Checking if "old-k8s-version-624341" exists ...
	I1014 21:15:59.535013  733850 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:15:59.618777  733850 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-14 21:15:59.607620302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:15:59.619793  733850 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-624341 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1014 21:15:59.623640  733850 out.go:179] * Pausing node old-k8s-version-624341 ... 
	I1014 21:15:59.626674  733850 host.go:66] Checking if "old-k8s-version-624341" exists ...
	I1014 21:15:59.627041  733850 ssh_runner.go:195] Run: systemctl --version
	I1014 21:15:59.627100  733850 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-624341
	I1014 21:15:59.653586  733850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33787 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/old-k8s-version-624341/id_rsa Username:docker}
	I1014 21:15:59.763385  733850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:15:59.787145  733850 pause.go:52] kubelet running: true
	I1014 21:15:59.787218  733850 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:16:00.144976  733850 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:16:00.145111  733850 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:16:00.350807  733850 cri.go:89] found id: "ebfa0b922c57afc3fdaeb2803f737efb84c1994638ec9fd37a8a5b060474e3c1"
	I1014 21:16:00.350831  733850 cri.go:89] found id: "db04755ef37168b8f476efca5050b4572e1eeb8a4cc86296f34a701cad16040c"
	I1014 21:16:00.350835  733850 cri.go:89] found id: "ba3691b44f3f8a89648561b1d482df4ea2019f75e96a10bbe93983037c866a5d"
	I1014 21:16:00.350840  733850 cri.go:89] found id: "613c6d12dd0234b47511029fb2b5f65d7ad28eea15a421aca55df8b32d0742d9"
	I1014 21:16:00.350843  733850 cri.go:89] found id: "97affc6ef505a0056c2725bf977aae927fc4444687b58e22069669f20bca096d"
	I1014 21:16:00.350847  733850 cri.go:89] found id: "04640afe232d4e2d06f25f581c82ecc2b6b9a7fa0721a54724213eab98c897f9"
	I1014 21:16:00.350850  733850 cri.go:89] found id: "1947fbb61b56d1cf4961829e7eec2bc21178da55f70176cf65ede34915696c40"
	I1014 21:16:00.350853  733850 cri.go:89] found id: "d0db4c4eeb0f4f4601df4e42968b81010479fec3afca66413f91feb6bdfcc5d4"
	I1014 21:16:00.350855  733850 cri.go:89] found id: "f31f4eccb8ff5b77a90ad8c8dce9d271a1f4385ba9215f4a0d2b961573fd7aeb"
	I1014 21:16:00.350862  733850 cri.go:89] found id: "423ffa133d11035f313b9e89d597c4a8f5fd7b7a297f6259f78408ad42fbc326"
	I1014 21:16:00.350865  733850 cri.go:89] found id: "a275fcb41cc41d4f028e2ada689b325a86925c16d9ddca4963aad91b50607886"
	I1014 21:16:00.350868  733850 cri.go:89] found id: ""
	I1014 21:16:00.350921  733850 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:16:00.365770  733850 retry.go:31] will retry after 305.542956ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:16:00Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:16:00.672269  733850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:16:00.686589  733850 pause.go:52] kubelet running: false
	I1014 21:16:00.686657  733850 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:16:00.899651  733850 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:16:00.899723  733850 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:16:00.986170  733850 cri.go:89] found id: "ebfa0b922c57afc3fdaeb2803f737efb84c1994638ec9fd37a8a5b060474e3c1"
	I1014 21:16:00.986190  733850 cri.go:89] found id: "db04755ef37168b8f476efca5050b4572e1eeb8a4cc86296f34a701cad16040c"
	I1014 21:16:00.986194  733850 cri.go:89] found id: "ba3691b44f3f8a89648561b1d482df4ea2019f75e96a10bbe93983037c866a5d"
	I1014 21:16:00.986198  733850 cri.go:89] found id: "613c6d12dd0234b47511029fb2b5f65d7ad28eea15a421aca55df8b32d0742d9"
	I1014 21:16:00.986201  733850 cri.go:89] found id: "97affc6ef505a0056c2725bf977aae927fc4444687b58e22069669f20bca096d"
	I1014 21:16:00.986205  733850 cri.go:89] found id: "04640afe232d4e2d06f25f581c82ecc2b6b9a7fa0721a54724213eab98c897f9"
	I1014 21:16:00.986208  733850 cri.go:89] found id: "1947fbb61b56d1cf4961829e7eec2bc21178da55f70176cf65ede34915696c40"
	I1014 21:16:00.986211  733850 cri.go:89] found id: "d0db4c4eeb0f4f4601df4e42968b81010479fec3afca66413f91feb6bdfcc5d4"
	I1014 21:16:00.986214  733850 cri.go:89] found id: "f31f4eccb8ff5b77a90ad8c8dce9d271a1f4385ba9215f4a0d2b961573fd7aeb"
	I1014 21:16:00.986220  733850 cri.go:89] found id: "423ffa133d11035f313b9e89d597c4a8f5fd7b7a297f6259f78408ad42fbc326"
	I1014 21:16:00.986223  733850 cri.go:89] found id: "a275fcb41cc41d4f028e2ada689b325a86925c16d9ddca4963aad91b50607886"
	I1014 21:16:00.986226  733850 cri.go:89] found id: ""
	I1014 21:16:00.986300  733850 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:16:01.001854  733850 retry.go:31] will retry after 244.910056ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:16:00Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:16:01.247317  733850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:16:01.264834  733850 pause.go:52] kubelet running: false
	I1014 21:16:01.264906  733850 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:16:01.502055  733850 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:16:01.502141  733850 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:16:01.606066  733850 cri.go:89] found id: "ebfa0b922c57afc3fdaeb2803f737efb84c1994638ec9fd37a8a5b060474e3c1"
	I1014 21:16:01.606093  733850 cri.go:89] found id: "db04755ef37168b8f476efca5050b4572e1eeb8a4cc86296f34a701cad16040c"
	I1014 21:16:01.606098  733850 cri.go:89] found id: "ba3691b44f3f8a89648561b1d482df4ea2019f75e96a10bbe93983037c866a5d"
	I1014 21:16:01.606102  733850 cri.go:89] found id: "613c6d12dd0234b47511029fb2b5f65d7ad28eea15a421aca55df8b32d0742d9"
	I1014 21:16:01.606105  733850 cri.go:89] found id: "97affc6ef505a0056c2725bf977aae927fc4444687b58e22069669f20bca096d"
	I1014 21:16:01.606114  733850 cri.go:89] found id: "04640afe232d4e2d06f25f581c82ecc2b6b9a7fa0721a54724213eab98c897f9"
	I1014 21:16:01.606117  733850 cri.go:89] found id: "1947fbb61b56d1cf4961829e7eec2bc21178da55f70176cf65ede34915696c40"
	I1014 21:16:01.606120  733850 cri.go:89] found id: "d0db4c4eeb0f4f4601df4e42968b81010479fec3afca66413f91feb6bdfcc5d4"
	I1014 21:16:01.606123  733850 cri.go:89] found id: "f31f4eccb8ff5b77a90ad8c8dce9d271a1f4385ba9215f4a0d2b961573fd7aeb"
	I1014 21:16:01.606129  733850 cri.go:89] found id: "423ffa133d11035f313b9e89d597c4a8f5fd7b7a297f6259f78408ad42fbc326"
	I1014 21:16:01.606133  733850 cri.go:89] found id: "a275fcb41cc41d4f028e2ada689b325a86925c16d9ddca4963aad91b50607886"
	I1014 21:16:01.606135  733850 cri.go:89] found id: ""
	I1014 21:16:01.606184  733850 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:16:01.629409  733850 out.go:203] 
	W1014 21:16:01.632526  733850 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:16:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:16:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 21:16:01.632556  733850 out.go:285] * 
	* 
	W1014 21:16:01.641630  733850 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 21:16:01.644724  733850 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-624341 --alsologtostderr -v=1 failed: exit status 80
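Per the stderr above, the pause path checks and disables the kubelet, lists containers in the kube-system, kubernetes-dashboard and istio-operator namespaces with crictl, then runs `sudo runc list -f json`; that last step exits 1 with "open /run/runc: no such file or directory" on every retry, so minikube aborts with GUEST_PAUSE (exit status 80). A minimal sketch of the same sequence run by hand against this profile; the node-side commands are copied from the log, while the `ssh -p <profile>` invocation from the harness host is an assumption and not part of the original run:

	# does the kubelet still report active on the node?
	out/minikube-linux-arm64 ssh -p old-k8s-version-624341 "sudo systemctl is-active kubelet"
	# list kube-system containers the same way pause does
	out/minikube-linux-arm64 ssh -p old-k8s-version-624341 "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# the step that fails in this run: runc's default state dir /run/runc is missing on the crio node
	out/minikube-linux-arm64 ssh -p old-k8s-version-624341 "sudo runc list -f json"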
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-624341
helpers_test.go:243: (dbg) docker inspect old-k8s-version-624341:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e",
	        "Created": "2025-10-14T21:13:39.162471971Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 729155,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:14:56.770466154Z",
	            "FinishedAt": "2025-10-14T21:14:55.94155281Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e/hostname",
	        "HostsPath": "/var/lib/docker/containers/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e/hosts",
	        "LogPath": "/var/lib/docker/containers/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e-json.log",
	        "Name": "/old-k8s-version-624341",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-624341:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-624341",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e",
	                "LowerDir": "/var/lib/docker/overlay2/df63c5996c488485ebb370ec3026a56632690e92fc52f35ca0adac5d4be4589c-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/df63c5996c488485ebb370ec3026a56632690e92fc52f35ca0adac5d4be4589c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/df63c5996c488485ebb370ec3026a56632690e92fc52f35ca0adac5d4be4589c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/df63c5996c488485ebb370ec3026a56632690e92fc52f35ca0adac5d4be4589c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-624341",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-624341/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-624341",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-624341",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-624341",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ee796a27b199ab53e2ea1fa41b976fae08c06c9a2f022091a48602b4da21fcb1",
	            "SandboxKey": "/var/run/docker/netns/ee796a27b199",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33788"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33791"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33789"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33790"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-624341": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:66:dd:67:f7:76",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8335d3f3a3006af89042686dfaf481d24c09a8bd966d06b2f14f4f84c184d6d9",
	                    "EndpointID": "0d440c393d224ee0facd79ea7c429f48bd6035f49f931c3e459d5c18e2e84a9c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-624341",
	                        "15ab9b4ad277"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-624341 -n old-k8s-version-624341
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-624341 -n old-k8s-version-624341: exit status 2 (480.434854ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-624341 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-624341 logs -n 25: (1.723884085s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-810782 sudo containerd config dump                                                                                                                                                                                                  │ cilium-810782                │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-810782                │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-810782                │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-810782                │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo crio config                                                                                                                                                                                                             │ cilium-810782                │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ delete  │ -p cilium-810782                                                                                                                                                                                                                              │ cilium-810782                │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │ 14 Oct 25 21:04 UTC │
	│ start   │ -p force-systemd-env-347955 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-347955     │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ force-systemd-flag-162846 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-162846    │ jenkins │ v1.37.0 │ 14 Oct 25 21:11 UTC │ 14 Oct 25 21:11 UTC │
	│ delete  │ -p force-systemd-flag-162846                                                                                                                                                                                                                  │ force-systemd-flag-162846    │ jenkins │ v1.37.0 │ 14 Oct 25 21:11 UTC │ 14 Oct 25 21:11 UTC │
	│ start   │ -p cert-expiration-341199 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:11 UTC │ 14 Oct 25 21:12 UTC │
	│ delete  │ -p force-systemd-env-347955                                                                                                                                                                                                                   │ force-systemd-env-347955     │ jenkins │ v1.37.0 │ 14 Oct 25 21:12 UTC │ 14 Oct 25 21:12 UTC │
	│ start   │ -p cert-options-967887 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:12 UTC │ 14 Oct 25 21:13 UTC │
	│ ssh     │ cert-options-967887 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ ssh     │ -p cert-options-967887 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ delete  │ -p cert-options-967887                                                                                                                                                                                                                        │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ start   │ -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-624341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │                     │
	│ stop    │ -p old-k8s-version-624341 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-624341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:14 UTC │
	│ start   │ -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:15 UTC │
	│ start   │ -p cert-expiration-341199 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ delete  │ -p cert-expiration-341199                                                                                                                                                                                                                     │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ start   │ -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │                     │
	│ image   │ old-k8s-version-624341 image list --format=json                                                                                                                                                                                               │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ pause   │ -p old-k8s-version-624341 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:15:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:15:48.504000  732597 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:15:48.504115  732597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:15:48.504126  732597 out.go:374] Setting ErrFile to fd 2...
	I1014 21:15:48.504132  732597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:15:48.504735  732597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:15:48.505297  732597 out.go:368] Setting JSON to false
	I1014 21:15:48.506313  732597 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14299,"bootTime":1760462250,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:15:48.506406  732597 start.go:141] virtualization:  
	I1014 21:15:48.512651  732597 out.go:179] * [default-k8s-diff-port-319221] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:15:48.516409  732597 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:15:48.516488  732597 notify.go:220] Checking for updates...
	I1014 21:15:48.523859  732597 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:15:48.527283  732597 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:15:48.530524  732597 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:15:48.533692  732597 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:15:48.536886  732597 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:15:48.540646  732597 config.go:182] Loaded profile config "old-k8s-version-624341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1014 21:15:48.540782  732597 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:15:48.574583  732597 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:15:48.574768  732597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:15:48.639512  732597 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:15:48.629866958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:15:48.639625  732597 docker.go:318] overlay module found
	I1014 21:15:48.642832  732597 out.go:179] * Using the docker driver based on user configuration
	I1014 21:15:48.645831  732597 start.go:305] selected driver: docker
	I1014 21:15:48.645853  732597 start.go:925] validating driver "docker" against <nil>
	I1014 21:15:48.645869  732597 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:15:48.646642  732597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:15:48.709817  732597 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:15:48.700729929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:15:48.709975  732597 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 21:15:48.710211  732597 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:15:48.713239  732597 out.go:179] * Using Docker driver with root privileges
	I1014 21:15:48.716112  732597 cni.go:84] Creating CNI manager for ""
	I1014 21:15:48.716187  732597 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:15:48.716201  732597 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 21:15:48.716289  732597 start.go:349] cluster config:
	{Name:default-k8s-diff-port-319221 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-319221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:15:48.721532  732597 out.go:179] * Starting "default-k8s-diff-port-319221" primary control-plane node in "default-k8s-diff-port-319221" cluster
	I1014 21:15:48.724437  732597 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:15:48.727357  732597 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:15:48.730392  732597 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:15:48.730471  732597 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 21:15:48.730483  732597 cache.go:58] Caching tarball of preloaded images
	I1014 21:15:48.730574  732597 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:15:48.730589  732597 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 21:15:48.730708  732597 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/config.json ...
	I1014 21:15:48.730732  732597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/config.json: {Name:mkd951e8ad03e5af48ae4706453f268ecdb9d715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:15:48.730423  732597 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:15:48.750884  732597 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:15:48.750914  732597 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:15:48.750928  732597 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:15:48.750949  732597 start.go:360] acquireMachinesLock for default-k8s-diff-port-319221: {Name:mkd932431e6f0afcc7b0e6021b1702e8970f2634 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:15:48.751056  732597 start.go:364] duration metric: took 87.419µs to acquireMachinesLock for "default-k8s-diff-port-319221"
	I1014 21:15:48.751087  732597 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-319221 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-319221 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:15:48.751158  732597 start.go:125] createHost starting for "" (driver="docker")
	I1014 21:15:48.754590  732597 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 21:15:48.754811  732597 start.go:159] libmachine.API.Create for "default-k8s-diff-port-319221" (driver="docker")
	I1014 21:15:48.754860  732597 client.go:168] LocalClient.Create starting
	I1014 21:15:48.754948  732597 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem
	I1014 21:15:48.754986  732597 main.go:141] libmachine: Decoding PEM data...
	I1014 21:15:48.755006  732597 main.go:141] libmachine: Parsing certificate...
	I1014 21:15:48.755060  732597 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem
	I1014 21:15:48.755086  732597 main.go:141] libmachine: Decoding PEM data...
	I1014 21:15:48.755096  732597 main.go:141] libmachine: Parsing certificate...
	I1014 21:15:48.755447  732597 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-319221 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 21:15:48.772231  732597 cli_runner.go:211] docker network inspect default-k8s-diff-port-319221 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 21:15:48.772367  732597 network_create.go:284] running [docker network inspect default-k8s-diff-port-319221] to gather additional debugging logs...
	I1014 21:15:48.772389  732597 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-319221
	W1014 21:15:48.788203  732597 cli_runner.go:211] docker network inspect default-k8s-diff-port-319221 returned with exit code 1
	I1014 21:15:48.788236  732597 network_create.go:287] error running [docker network inspect default-k8s-diff-port-319221]: docker network inspect default-k8s-diff-port-319221: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-319221 not found
	I1014 21:15:48.788250  732597 network_create.go:289] output of [docker network inspect default-k8s-diff-port-319221]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-319221 not found
	
	** /stderr **
	I1014 21:15:48.788438  732597 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:15:48.804995  732597 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3483fd8ccce IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:16:f0:2b:be:5b} reservation:<nil>}
	I1014 21:15:48.805394  732597 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-92eecc9d5dcd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:46:39:04:de:cc:9d} reservation:<nil>}
	I1014 21:15:48.805753  732597 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6db3c426d67c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:2a:f4:c0:70:fe} reservation:<nil>}
	I1014 21:15:48.806199  732597 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019bbc40}
	I1014 21:15:48.806223  732597 network_create.go:124] attempt to create docker network default-k8s-diff-port-319221 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1014 21:15:48.806286  732597 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-319221 default-k8s-diff-port-319221
	I1014 21:15:48.868676  732597 network_create.go:108] docker network default-k8s-diff-port-319221 192.168.76.0/24 created
	I1014 21:15:48.868705  732597 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-319221" container
	I1014 21:15:48.868796  732597 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 21:15:48.885017  732597 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-319221 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-319221 --label created_by.minikube.sigs.k8s.io=true
	I1014 21:15:48.902862  732597 oci.go:103] Successfully created a docker volume default-k8s-diff-port-319221
	I1014 21:15:48.902965  732597 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-319221-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-319221 --entrypoint /usr/bin/test -v default-k8s-diff-port-319221:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 21:15:49.462841  732597 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-319221
	I1014 21:15:49.462890  732597 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:15:49.462910  732597 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 21:15:49.462989  732597 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-319221:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 21:15:54.412445  732597 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-319221:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.949401605s)
	I1014 21:15:54.412489  732597 kic.go:203] duration metric: took 4.949576573s to extract preloaded images to volume ...
	W1014 21:15:54.412634  732597 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1014 21:15:54.412749  732597 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 21:15:54.476132  732597 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-319221 --name default-k8s-diff-port-319221 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-319221 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-319221 --network default-k8s-diff-port-319221 --ip 192.168.76.2 --volume default-k8s-diff-port-319221:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 21:15:54.775389  732597 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-319221 --format={{.State.Running}}
	I1014 21:15:54.800283  732597 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-319221 --format={{.State.Status}}
	I1014 21:15:54.824589  732597 cli_runner.go:164] Run: docker exec default-k8s-diff-port-319221 stat /var/lib/dpkg/alternatives/iptables
	I1014 21:15:54.877377  732597 oci.go:144] the created container "default-k8s-diff-port-319221" has a running status.
	I1014 21:15:54.877423  732597 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/default-k8s-diff-port-319221/id_rsa...
	I1014 21:15:55.477914  732597 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-545047/.minikube/machines/default-k8s-diff-port-319221/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 21:15:55.497766  732597 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-319221 --format={{.State.Status}}
	I1014 21:15:55.514673  732597 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 21:15:55.514728  732597 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-319221 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 21:15:55.555116  732597 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-319221 --format={{.State.Status}}
	I1014 21:15:55.573004  732597 machine.go:93] provisionDockerMachine start ...
	I1014 21:15:55.573106  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:15:55.590533  732597 main.go:141] libmachine: Using SSH client type: native
	I1014 21:15:55.590889  732597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1014 21:15:55.590899  732597 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:15:55.591469  732597 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37446->127.0.0.1:33792: read: connection reset by peer
	I1014 21:15:58.738210  732597 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-319221
	
	I1014 21:15:58.738234  732597 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-319221"
	I1014 21:15:58.738321  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:15:58.758324  732597 main.go:141] libmachine: Using SSH client type: native
	I1014 21:15:58.758657  732597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1014 21:15:58.758677  732597 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-319221 && echo "default-k8s-diff-port-319221" | sudo tee /etc/hostname
	I1014 21:15:58.914448  732597 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-319221
	
	I1014 21:15:58.914525  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:15:58.932893  732597 main.go:141] libmachine: Using SSH client type: native
	I1014 21:15:58.933213  732597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1014 21:15:58.933238  732597 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-319221' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-319221/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-319221' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:15:59.082627  732597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 21:15:59.082655  732597 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:15:59.082674  732597 ubuntu.go:190] setting up certificates
	I1014 21:15:59.082684  732597 provision.go:84] configureAuth start
	I1014 21:15:59.082745  732597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-319221
	I1014 21:15:59.109422  732597 provision.go:143] copyHostCerts
	I1014 21:15:59.109494  732597 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:15:59.109517  732597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:15:59.109607  732597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:15:59.109700  732597 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:15:59.109711  732597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:15:59.109737  732597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:15:59.109796  732597 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:15:59.109805  732597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:15:59.109834  732597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:15:59.109883  732597 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-319221 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-319221 localhost minikube]
	I1014 21:15:59.378630  732597 provision.go:177] copyRemoteCerts
	I1014 21:15:59.378772  732597 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:15:59.378853  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:15:59.401955  732597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/default-k8s-diff-port-319221/id_rsa Username:docker}
	I1014 21:15:59.525747  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:15:59.554195  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1014 21:15:59.584253  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 21:15:59.608871  732597 provision.go:87] duration metric: took 526.162366ms to configureAuth
	I1014 21:15:59.608894  732597 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:15:59.609079  732597 config.go:182] Loaded profile config "default-k8s-diff-port-319221": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:15:59.609194  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:15:59.630450  732597 main.go:141] libmachine: Using SSH client type: native
	I1014 21:15:59.630763  732597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1014 21:15:59.630783  732597 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:16:00.065874  732597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:16:00.065899  732597 machine.go:96] duration metric: took 4.492874342s to provisionDockerMachine
	I1014 21:16:00.065910  732597 client.go:171] duration metric: took 11.311038088s to LocalClient.Create
	I1014 21:16:00.065924  732597 start.go:167] duration metric: took 11.311115061s to libmachine.API.Create "default-k8s-diff-port-319221"
	I1014 21:16:00.065951  732597 start.go:293] postStartSetup for "default-k8s-diff-port-319221" (driver="docker")
	I1014 21:16:00.065963  732597 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:16:00.066034  732597 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:16:00.066078  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:16:00.162920  732597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/default-k8s-diff-port-319221/id_rsa Username:docker}
	I1014 21:16:00.323638  732597 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:16:00.329138  732597 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:16:00.329172  732597 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:16:00.329185  732597 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:16:00.329254  732597 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:16:00.329361  732597 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:16:00.329506  732597 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:16:00.344697  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:16:00.383437  732597 start.go:296] duration metric: took 317.468308ms for postStartSetup
	I1014 21:16:00.383922  732597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-319221
	I1014 21:16:00.414648  732597 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/config.json ...
	I1014 21:16:00.415043  732597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:16:00.415097  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:16:00.439042  732597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/default-k8s-diff-port-319221/id_rsa Username:docker}
	I1014 21:16:00.550314  732597 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:16:00.555656  732597 start.go:128] duration metric: took 11.804482865s to createHost
	I1014 21:16:00.555684  732597 start.go:83] releasing machines lock for "default-k8s-diff-port-319221", held for 11.804614165s
	I1014 21:16:00.555768  732597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-319221
	I1014 21:16:00.580212  732597 ssh_runner.go:195] Run: cat /version.json
	I1014 21:16:00.580222  732597 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:16:00.580299  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:16:00.580382  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:16:00.604926  732597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/default-k8s-diff-port-319221/id_rsa Username:docker}
	I1014 21:16:00.612698  732597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/default-k8s-diff-port-319221/id_rsa Username:docker}
	I1014 21:16:00.712646  732597 ssh_runner.go:195] Run: systemctl --version
	I1014 21:16:00.824077  732597 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:16:00.884980  732597 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:16:00.890322  732597 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:16:00.890399  732597 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:16:00.929264  732597 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1014 21:16:00.929284  732597 start.go:495] detecting cgroup driver to use...
	I1014 21:16:00.929315  732597 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:16:00.929362  732597 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:16:00.952481  732597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:16:00.967506  732597 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:16:00.967614  732597 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:16:00.984737  732597 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:16:01.009001  732597 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:16:01.135166  732597 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:16:01.277572  732597 docker.go:234] disabling docker service ...
	I1014 21:16:01.277686  732597 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:16:01.325788  732597 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:16:01.349488  732597 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:16:01.513531  732597 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:16:01.669155  732597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:16:01.686973  732597 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:16:01.705975  732597 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:16:01.706032  732597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:01.715366  732597 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:16:01.715435  732597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:01.724955  732597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:01.734386  732597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:01.743386  732597 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:16:01.751872  732597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:01.760996  732597 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:01.776810  732597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:01.786888  732597 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:16:01.798531  732597 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:16:01.810324  732597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:16:01.954007  732597 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 21:16:02.112258  732597 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:16:02.112406  732597 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:16:02.117063  732597 start.go:563] Will wait 60s for crictl version
	I1014 21:16:02.117126  732597 ssh_runner.go:195] Run: which crictl
	I1014 21:16:02.121115  732597 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:16:02.151441  732597 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:16:02.151542  732597 ssh_runner.go:195] Run: crio --version
	I1014 21:16:02.205897  732597 ssh_runner.go:195] Run: crio --version
	I1014 21:16:02.250965  732597 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
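If the restart had left the runtime unhealthy, the same checks minikube runs here can be repeated by hand; a minimal sketch, assuming shell access to the node and the crictl binary the log already located at /usr/local/bin/crictl:

	stat /var/run/crio/crio.sock
	sudo /usr/local/bin/crictl version   # should report RuntimeName: cri-o, RuntimeVersion: 1.34.1
	crio --version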
	
	
	==> CRI-O <==
	Oct 14 21:15:42 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:42.981357223Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=26cd4bfd-c5de-49cf-a5c7-9d0ef50b4c8c name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:15:42 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:42.982901573Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=33ba138f-856c-4b44-be4f-a5c9e38c41df name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:15:42 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:42.983883954Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m/dashboard-metrics-scraper" id=c7fc8031-e133-4a29-b4ca-1b1e5e7529b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:15:42 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:42.984225243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:15:43 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:43.001164838Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:15:43 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:43.005010381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:15:43 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:43.039495683Z" level=info msg="Created container 423ffa133d11035f313b9e89d597c4a8f5fd7b7a297f6259f78408ad42fbc326: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m/dashboard-metrics-scraper" id=c7fc8031-e133-4a29-b4ca-1b1e5e7529b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:15:43 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:43.041358971Z" level=info msg="Starting container: 423ffa133d11035f313b9e89d597c4a8f5fd7b7a297f6259f78408ad42fbc326" id=7f51461e-f76b-487d-b33b-586943b8aaab name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:15:43 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:43.045191173Z" level=info msg="Started container" PID=1637 containerID=423ffa133d11035f313b9e89d597c4a8f5fd7b7a297f6259f78408ad42fbc326 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m/dashboard-metrics-scraper id=7f51461e-f76b-487d-b33b-586943b8aaab name=/runtime.v1.RuntimeService/StartContainer sandboxID=630c140ad9eb72637d5ed49c216f0884b67664458499df281b7a5180e29b95b1
	Oct 14 21:15:43 old-k8s-version-624341 conmon[1635]: conmon 423ffa133d11035f313b <ninfo>: container 1637 exited with status 1
	Oct 14 21:15:43 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:43.223026506Z" level=info msg="Removing container: b3900a852bc7de6cf6eac77094590c29b59a5fe3ca713b107dbd63f8f7f2f5c5" id=3bcd12d8-d8fd-4ad2-a1af-3eff7326d377 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 21:15:43 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:43.2458638Z" level=info msg="Error loading conmon cgroup of container b3900a852bc7de6cf6eac77094590c29b59a5fe3ca713b107dbd63f8f7f2f5c5: cgroup deleted" id=3bcd12d8-d8fd-4ad2-a1af-3eff7326d377 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 21:15:43 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:43.249738603Z" level=info msg="Removed container b3900a852bc7de6cf6eac77094590c29b59a5fe3ca713b107dbd63f8f7f2f5c5: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m/dashboard-metrics-scraper" id=3bcd12d8-d8fd-4ad2-a1af-3eff7326d377 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.974336682Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.981456667Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.981491146Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.981513234Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.987195259Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.987236457Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.987306579Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.993176242Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.993208775Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.993232193Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.996785359Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.99748543Z" level=info msg="Updated default CNI network name to kindnet"
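The CNI monitoring events above track kindnet rewriting its conflist and CRI-O picking it up as the default network. To confirm what CRI-O ended up with, reading the file it reports is enough; a sketch, assuming shell access to the node:

	sudo ls /etc/cni/net.d/
	sudo cat /etc/cni/net.d/10-kindnet.conflist   # should define the "kindnet" network with a ptp plugin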
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	423ffa133d110       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   630c140ad9eb7       dashboard-metrics-scraper-5f989dc9cf-v4q8m       kubernetes-dashboard
	ebfa0b922c57a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   e18c804058051       storage-provisioner                              kube-system
	a275fcb41cc41       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago      Running             kubernetes-dashboard        0                   56132504337cc       kubernetes-dashboard-8694d4445c-w9nz4            kubernetes-dashboard
	db04755ef3716       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           52 seconds ago      Running             coredns                     1                   4a07b9e7c30db       coredns-5dd5756b68-cqt59                         kube-system
	ba3691b44f3f8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   e18c804058051       storage-provisioner                              kube-system
	613c6d12dd023       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   bd27f6c47d608       kindnet-5ffrd                                    kube-system
	2fd57fc137972       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   0e437e893efef       busybox                                          default
	97affc6ef505a       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           52 seconds ago      Running             kube-proxy                  1                   9bc1e4a25f512       kube-proxy-v2k8v                                 kube-system
	04640afe232d4       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           58 seconds ago      Running             kube-scheduler              1                   5f38cb6417a78       kube-scheduler-old-k8s-version-624341            kube-system
	1947fbb61b56d       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           58 seconds ago      Running             kube-controller-manager     1                   c6fe17cad36a6       kube-controller-manager-old-k8s-version-624341   kube-system
	d0db4c4eeb0f4       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           58 seconds ago      Running             etcd                        1                   754d7fb41e136       etcd-old-k8s-version-624341                      kube-system
	f31f4eccb8ff5       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           58 seconds ago      Running             kube-apiserver              1                   625520acd1c69       kube-apiserver-old-k8s-version-624341            kube-system
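Each of the per-container log sections that follow corresponds to one row of this table. The same data can be pulled directly with crictl on the node, for example for the exited dashboard-metrics-scraper container, using the truncated ID from the first column:

	sudo crictl ps -a
	sudo crictl logs 423ffa133d110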
	
	
	==> coredns [db04755ef37168b8f476efca5050b4572e1eeb8a4cc86296f34a701cad16040c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50856 - 8005 "HINFO IN 8160849112293248488.1898350068210291602. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014063927s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-624341
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-624341
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=old-k8s-version-624341
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T21_14_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 21:13:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-624341
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 21:15:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 21:15:39 +0000   Tue, 14 Oct 2025 21:13:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 21:15:39 +0000   Tue, 14 Oct 2025 21:13:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 21:15:39 +0000   Tue, 14 Oct 2025 21:13:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 21:15:39 +0000   Tue, 14 Oct 2025 21:14:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-624341
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a9ddd38fe3d4f3bae60dc17786ee49a
	  System UUID:                32ff5aeb-1d61-4167-8fd8-805f888e4635
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-cqt59                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     107s
	  kube-system                 etcd-old-k8s-version-624341                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-5ffrd                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-624341             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-624341    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-v2k8v                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-624341             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-v4q8m        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-w9nz4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m1s               kubelet          Node old-k8s-version-624341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s               kubelet          Node old-k8s-version-624341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s               kubelet          Node old-k8s-version-624341 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node old-k8s-version-624341 event: Registered Node old-k8s-version-624341 in Controller
	  Normal  NodeReady                94s                kubelet          Node old-k8s-version-624341 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 60s)  kubelet          Node old-k8s-version-624341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 60s)  kubelet          Node old-k8s-version-624341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 60s)  kubelet          Node old-k8s-version-624341 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                node-controller  Node old-k8s-version-624341 event: Registered Node old-k8s-version-624341 in Controller
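This node description can be regenerated at any point against the same kubeconfig context the post-mortem helpers use further below; a sketch:

	kubectl --context old-k8s-version-624341 describe node old-k8s-version-624341
	kubectl --context old-k8s-version-624341 get events --sort-by=.lastTimestamp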
	
	
	==> dmesg <==
	[Oct14 20:38] overlayfs: idmapped layers are currently not supported
	[Oct14 20:43] overlayfs: idmapped layers are currently not supported
	[ +36.710003] overlayfs: idmapped layers are currently not supported
	[Oct14 20:45] overlayfs: idmapped layers are currently not supported
	[Oct14 20:46] overlayfs: idmapped layers are currently not supported
	[Oct14 20:47] overlayfs: idmapped layers are currently not supported
	[Oct14 20:48] overlayfs: idmapped layers are currently not supported
	[Oct14 20:49] overlayfs: idmapped layers are currently not supported
	[ +15.861847] overlayfs: idmapped layers are currently not supported
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	[Oct14 21:03] overlayfs: idmapped layers are currently not supported
	[Oct14 21:04] overlayfs: idmapped layers are currently not supported
	[Oct14 21:12] overlayfs: idmapped layers are currently not supported
	[Oct14 21:13] overlayfs: idmapped layers are currently not supported
	[ +32.288963] overlayfs: idmapped layers are currently not supported
	[Oct14 21:15] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d0db4c4eeb0f4f4601df4e42968b81010479fec3afca66413f91feb6bdfcc5d4] <==
	{"level":"info","ts":"2025-10-14T21:15:05.089091Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-14T21:15:05.089099Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-14T21:15:05.089318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-14T21:15:05.08937Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-14T21:15:05.089465Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-14T21:15:05.089493Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-14T21:15:05.117341Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-14T21:15:05.121664Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-14T21:15:05.121694Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-14T21:15:05.12135Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-14T21:15:05.121728Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-14T21:15:06.313054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-14T21:15:06.313103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-14T21:15:06.313132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-14T21:15:06.313147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-14T21:15:06.313164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-14T21:15:06.313175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-14T21:15:06.313184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-14T21:15:06.320736Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-624341 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-14T21:15:06.322183Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-14T21:15:06.323204Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-14T21:15:06.325244Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-14T21:15:06.326153Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-14T21:15:06.326336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-14T21:15:06.326379Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 21:16:03 up  3:58,  0 user,  load average: 3.11, 1.88, 1.72
	Linux old-k8s-version-624341 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [613c6d12dd0234b47511029fb2b5f65d7ad28eea15a421aca55df8b32d0742d9] <==
	I1014 21:15:10.687660       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:15:10.780184       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1014 21:15:10.780370       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:15:10.780384       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:15:10.780402       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:15:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:15:10.973550       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:15:10.973576       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:15:10.973584       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:15:10.974471       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1014 21:15:40.974052       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1014 21:15:40.974272       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1014 21:15:40.974361       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1014 21:15:40.974448       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1014 21:15:42.274293       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1014 21:15:42.274335       1 metrics.go:72] Registering metrics
	I1014 21:15:42.274410       1 controller.go:711] "Syncing nftables rules"
	I1014 21:15:50.974037       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1014 21:15:50.974092       1 main.go:301] handling current node
	I1014 21:16:00.973479       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1014 21:16:00.973510       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f31f4eccb8ff5b77a90ad8c8dce9d271a1f4385ba9215f4a0d2b961573fd7aeb] <==
	I1014 21:15:09.016870       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 21:15:09.046639       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1014 21:15:09.054615       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1014 21:15:09.061407       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1014 21:15:09.062730       1 shared_informer.go:318] Caches are synced for configmaps
	I1014 21:15:09.062911       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1014 21:15:09.062927       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1014 21:15:09.065304       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 21:15:09.084706       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1014 21:15:09.084895       1 aggregator.go:166] initial CRD sync complete...
	I1014 21:15:09.086637       1 autoregister_controller.go:141] Starting autoregister controller
	I1014 21:15:09.086654       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 21:15:09.086662       1 cache.go:39] Caches are synced for autoregister controller
	E1014 21:15:09.166049       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1014 21:15:09.655323       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 21:15:11.262589       1 controller.go:624] quota admission added evaluator for: namespaces
	I1014 21:15:11.322320       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1014 21:15:11.358245       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 21:15:11.375049       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 21:15:11.385401       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1014 21:15:11.445120       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.100.114"}
	I1014 21:15:11.466262       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.218.251"}
	I1014 21:15:21.507911       1 controller.go:624] quota admission added evaluator for: endpoints
	I1014 21:15:21.671015       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 21:15:21.718940       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [1947fbb61b56d1cf4961829e7eec2bc21178da55f70176cf65ede34915696c40] <==
	I1014 21:15:21.660459       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1014 21:15:21.724499       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1014 21:15:21.728876       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1014 21:15:21.743201       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-w9nz4"
	I1014 21:15:21.743327       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-v4q8m"
	I1014 21:15:21.775152       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.591679ms"
	I1014 21:15:21.779768       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="50.460255ms"
	I1014 21:15:21.795673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="18.499428ms"
	I1014 21:15:21.795808       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.163µs"
	I1014 21:15:21.805942       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="26.056466ms"
	I1014 21:15:21.806243       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.315µs"
	I1014 21:15:21.806437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="31.557µs"
	I1014 21:15:21.821510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="54.655µs"
	I1014 21:15:22.069411       1 shared_informer.go:318] Caches are synced for garbage collector
	I1014 21:15:22.080617       1 shared_informer.go:318] Caches are synced for garbage collector
	I1014 21:15:22.080647       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1014 21:15:27.177792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="134.049µs"
	I1014 21:15:28.189558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="48.354µs"
	I1014 21:15:29.181825       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.253µs"
	I1014 21:15:32.217602       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.721818ms"
	I1014 21:15:32.218435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="602.158µs"
	I1014 21:15:43.232951       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.904µs"
	I1014 21:15:45.788387       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="28.690267ms"
	I1014 21:15:45.789345       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.433µs"
	I1014 21:15:52.112470       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.154µs"
	
	
	==> kube-proxy [97affc6ef505a0056c2725bf977aae927fc4444687b58e22069669f20bca096d] <==
	I1014 21:15:10.766398       1 server_others.go:69] "Using iptables proxy"
	I1014 21:15:10.787140       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1014 21:15:10.858618       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:15:10.863320       1 server_others.go:152] "Using iptables Proxier"
	I1014 21:15:10.863359       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1014 21:15:10.863372       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1014 21:15:10.863403       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1014 21:15:10.863642       1 server.go:846] "Version info" version="v1.28.0"
	I1014 21:15:10.863653       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:15:10.891903       1 config.go:188] "Starting service config controller"
	I1014 21:15:10.891924       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1014 21:15:10.891945       1 config.go:97] "Starting endpoint slice config controller"
	I1014 21:15:10.891948       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1014 21:15:10.891961       1 config.go:315] "Starting node config controller"
	I1014 21:15:10.891965       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1014 21:15:10.993560       1 shared_informer.go:318] Caches are synced for service config
	I1014 21:15:10.998172       1 shared_informer.go:318] Caches are synced for node config
	I1014 21:15:10.998200       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [04640afe232d4e2d06f25f581c82ecc2b6b9a7fa0721a54724213eab98c897f9] <==
	I1014 21:15:07.746196       1 serving.go:348] Generated self-signed cert in-memory
	I1014 21:15:09.843556       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1014 21:15:09.843591       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:15:09.863273       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1014 21:15:09.863308       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1014 21:15:09.863340       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:15:09.863350       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 21:15:09.863369       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 21:15:09.863374       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1014 21:15:09.864284       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1014 21:15:09.866629       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1014 21:15:09.964162       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1014 21:15:09.964220       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1014 21:15:09.964307       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 21:15:21 old-k8s-version-624341 kubelet[777]: I1014 21:15:21.858688     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a3da1720-834f-433f-b807-e33a8df959f4-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-v4q8m\" (UID: \"a3da1720-834f-433f-b807-e33a8df959f4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m"
	Oct 14 21:15:21 old-k8s-version-624341 kubelet[777]: I1014 21:15:21.858746     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0a5d607f-d276-4275-904a-eaad8181d089-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-w9nz4\" (UID: \"0a5d607f-d276-4275-904a-eaad8181d089\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-w9nz4"
	Oct 14 21:15:21 old-k8s-version-624341 kubelet[777]: I1014 21:15:21.858780     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc5ck\" (UniqueName: \"kubernetes.io/projected/a3da1720-834f-433f-b807-e33a8df959f4-kube-api-access-xc5ck\") pod \"dashboard-metrics-scraper-5f989dc9cf-v4q8m\" (UID: \"a3da1720-834f-433f-b807-e33a8df959f4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m"
	Oct 14 21:15:21 old-k8s-version-624341 kubelet[777]: I1014 21:15:21.858807     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r75wg\" (UniqueName: \"kubernetes.io/projected/0a5d607f-d276-4275-904a-eaad8181d089-kube-api-access-r75wg\") pod \"kubernetes-dashboard-8694d4445c-w9nz4\" (UID: \"0a5d607f-d276-4275-904a-eaad8181d089\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-w9nz4"
	Oct 14 21:15:22 old-k8s-version-624341 kubelet[777]: W1014 21:15:22.098112     777 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e/crio-630c140ad9eb72637d5ed49c216f0884b67664458499df281b7a5180e29b95b1 WatchSource:0}: Error finding container 630c140ad9eb72637d5ed49c216f0884b67664458499df281b7a5180e29b95b1: Status 404 returned error can't find the container with id 630c140ad9eb72637d5ed49c216f0884b67664458499df281b7a5180e29b95b1
	Oct 14 21:15:22 old-k8s-version-624341 kubelet[777]: W1014 21:15:22.115758     777 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e/crio-56132504337cce2b3db587ecdc291729c2845b1ee9d8ce85ceadd1b89938dfeb WatchSource:0}: Error finding container 56132504337cce2b3db587ecdc291729c2845b1ee9d8ce85ceadd1b89938dfeb: Status 404 returned error can't find the container with id 56132504337cce2b3db587ecdc291729c2845b1ee9d8ce85ceadd1b89938dfeb
	Oct 14 21:15:27 old-k8s-version-624341 kubelet[777]: I1014 21:15:27.155545     777 scope.go:117] "RemoveContainer" containerID="aecfc229626c355b888669c3fc61d117400e88d532edf9cd2c1e0af3b48a16f7"
	Oct 14 21:15:28 old-k8s-version-624341 kubelet[777]: I1014 21:15:28.159203     777 scope.go:117] "RemoveContainer" containerID="aecfc229626c355b888669c3fc61d117400e88d532edf9cd2c1e0af3b48a16f7"
	Oct 14 21:15:28 old-k8s-version-624341 kubelet[777]: I1014 21:15:28.159487     777 scope.go:117] "RemoveContainer" containerID="b3900a852bc7de6cf6eac77094590c29b59a5fe3ca713b107dbd63f8f7f2f5c5"
	Oct 14 21:15:28 old-k8s-version-624341 kubelet[777]: E1014 21:15:28.159745     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-v4q8m_kubernetes-dashboard(a3da1720-834f-433f-b807-e33a8df959f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m" podUID="a3da1720-834f-433f-b807-e33a8df959f4"
	Oct 14 21:15:29 old-k8s-version-624341 kubelet[777]: I1014 21:15:29.163249     777 scope.go:117] "RemoveContainer" containerID="b3900a852bc7de6cf6eac77094590c29b59a5fe3ca713b107dbd63f8f7f2f5c5"
	Oct 14 21:15:29 old-k8s-version-624341 kubelet[777]: E1014 21:15:29.163522     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-v4q8m_kubernetes-dashboard(a3da1720-834f-433f-b807-e33a8df959f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m" podUID="a3da1720-834f-433f-b807-e33a8df959f4"
	Oct 14 21:15:32 old-k8s-version-624341 kubelet[777]: I1014 21:15:32.070550     777 scope.go:117] "RemoveContainer" containerID="b3900a852bc7de6cf6eac77094590c29b59a5fe3ca713b107dbd63f8f7f2f5c5"
	Oct 14 21:15:32 old-k8s-version-624341 kubelet[777]: E1014 21:15:32.070947     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-v4q8m_kubernetes-dashboard(a3da1720-834f-433f-b807-e33a8df959f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m" podUID="a3da1720-834f-433f-b807-e33a8df959f4"
	Oct 14 21:15:41 old-k8s-version-624341 kubelet[777]: I1014 21:15:41.191484     777 scope.go:117] "RemoveContainer" containerID="ba3691b44f3f8a89648561b1d482df4ea2019f75e96a10bbe93983037c866a5d"
	Oct 14 21:15:41 old-k8s-version-624341 kubelet[777]: I1014 21:15:41.235102     777 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-w9nz4" podStartSLOduration=10.48612101 podCreationTimestamp="2025-10-14 21:15:21 +0000 UTC" firstStartedPulling="2025-10-14 21:15:22.119628415 +0000 UTC m=+18.369573290" lastFinishedPulling="2025-10-14 21:15:31.868548768 +0000 UTC m=+28.118493651" observedRunningTime="2025-10-14 21:15:32.196059144 +0000 UTC m=+28.446004019" watchObservedRunningTime="2025-10-14 21:15:41.235041371 +0000 UTC m=+37.484986246"
	Oct 14 21:15:42 old-k8s-version-624341 kubelet[777]: I1014 21:15:42.980682     777 scope.go:117] "RemoveContainer" containerID="b3900a852bc7de6cf6eac77094590c29b59a5fe3ca713b107dbd63f8f7f2f5c5"
	Oct 14 21:15:43 old-k8s-version-624341 kubelet[777]: I1014 21:15:43.200786     777 scope.go:117] "RemoveContainer" containerID="b3900a852bc7de6cf6eac77094590c29b59a5fe3ca713b107dbd63f8f7f2f5c5"
	Oct 14 21:15:43 old-k8s-version-624341 kubelet[777]: I1014 21:15:43.201150     777 scope.go:117] "RemoveContainer" containerID="423ffa133d11035f313b9e89d597c4a8f5fd7b7a297f6259f78408ad42fbc326"
	Oct 14 21:15:43 old-k8s-version-624341 kubelet[777]: E1014 21:15:43.201477     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-v4q8m_kubernetes-dashboard(a3da1720-834f-433f-b807-e33a8df959f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m" podUID="a3da1720-834f-433f-b807-e33a8df959f4"
	Oct 14 21:15:52 old-k8s-version-624341 kubelet[777]: I1014 21:15:52.070310     777 scope.go:117] "RemoveContainer" containerID="423ffa133d11035f313b9e89d597c4a8f5fd7b7a297f6259f78408ad42fbc326"
	Oct 14 21:15:52 old-k8s-version-624341 kubelet[777]: E1014 21:15:52.070658     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-v4q8m_kubernetes-dashboard(a3da1720-834f-433f-b807-e33a8df959f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m" podUID="a3da1720-834f-433f-b807-e33a8df959f4"
	Oct 14 21:16:00 old-k8s-version-624341 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 14 21:16:00 old-k8s-version-624341 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 14 21:16:00 old-k8s-version-624341 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [a275fcb41cc41d4f028e2ada689b325a86925c16d9ddca4963aad91b50607886] <==
	2025/10/14 21:15:31 Using namespace: kubernetes-dashboard
	2025/10/14 21:15:31 Using in-cluster config to connect to apiserver
	2025/10/14 21:15:31 Using secret token for csrf signing
	2025/10/14 21:15:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/14 21:15:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/14 21:15:31 Successful initial request to the apiserver, version: v1.28.0
	2025/10/14 21:15:31 Generating JWE encryption key
	2025/10/14 21:15:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/14 21:15:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/14 21:15:32 Initializing JWE encryption key from synchronized object
	2025/10/14 21:15:32 Creating in-cluster Sidecar client
	2025/10/14 21:15:32 Serving insecurely on HTTP port: 9090
	2025/10/14 21:15:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/14 21:16:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/14 21:15:31 Starting overwatch
	
	
	==> storage-provisioner [ba3691b44f3f8a89648561b1d482df4ea2019f75e96a10bbe93983037c866a5d] <==
	I1014 21:15:10.714727       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1014 21:15:40.719240       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ebfa0b922c57afc3fdaeb2803f737efb84c1994638ec9fd37a8a5b060474e3c1] <==
	I1014 21:15:41.285774       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 21:15:41.309885       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 21:15:41.316400       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 21:15:58.728622       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 21:15:58.728816       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-624341_6ff066b9-1a1b-46cb-93da-00aec88f7b4c!
	I1014 21:15:58.729851       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e06cdbc4-6782-498a-91de-d8fc4443f1e8", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-624341_6ff066b9-1a1b-46cb-93da-00aec88f7b4c became leader
	I1014 21:15:58.829708       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-624341_6ff066b9-1a1b-46cb-93da-00aec88f7b4c!
	

                                                
                                                
-- /stdout --
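The storage-provisioner logs at the end of the dump show the first instance timing out against the API server and the replacement instance then acquiring the kube-system/k8s.io-minikube-hostpath lease. Whether the lease really changed hands should be visible from the annotations on that Endpoints object; a sketch, using the same context as above:

	kubectl --context old-k8s-version-624341 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml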
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-624341 -n old-k8s-version-624341
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-624341 -n old-k8s-version-624341: exit status 2 (491.382849ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
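A non-zero exit from the status command generally flags a component that is not reported as Running, even though the APIServer field alone printed Running; that is consistent with the kubelet having just been stopped by the pause, per the kubelet log above. Dropping the --format template shows all component fields at once; a sketch with the same binary and profile:

	out/minikube-linux-arm64 status -p old-k8s-version-624341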
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-624341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-624341
helpers_test.go:243: (dbg) docker inspect old-k8s-version-624341:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e",
	        "Created": "2025-10-14T21:13:39.162471971Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 729155,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:14:56.770466154Z",
	            "FinishedAt": "2025-10-14T21:14:55.94155281Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e/hostname",
	        "HostsPath": "/var/lib/docker/containers/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e/hosts",
	        "LogPath": "/var/lib/docker/containers/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e-json.log",
	        "Name": "/old-k8s-version-624341",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-624341:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-624341",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e",
	                "LowerDir": "/var/lib/docker/overlay2/df63c5996c488485ebb370ec3026a56632690e92fc52f35ca0adac5d4be4589c-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/df63c5996c488485ebb370ec3026a56632690e92fc52f35ca0adac5d4be4589c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/df63c5996c488485ebb370ec3026a56632690e92fc52f35ca0adac5d4be4589c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/df63c5996c488485ebb370ec3026a56632690e92fc52f35ca0adac5d4be4589c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-624341",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-624341/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-624341",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-624341",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-624341",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ee796a27b199ab53e2ea1fa41b976fae08c06c9a2f022091a48602b4da21fcb1",
	            "SandboxKey": "/var/run/docker/netns/ee796a27b199",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33788"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33791"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33789"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33790"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-624341": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:66:dd:67:f7:76",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8335d3f3a3006af89042686dfaf481d24c09a8bd966d06b2f14f4f84c184d6d9",
	                    "EndpointID": "0d440c393d224ee0facd79ea7c429f48bd6035f49f931c3e459d5c18e2e84a9c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-624341",
	                        "15ab9b4ad277"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-624341 -n old-k8s-version-624341
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-624341 -n old-k8s-version-624341: exit status 2 (434.103297ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-624341 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-624341 logs -n 25: (1.624992133s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-810782 sudo containerd config dump                                                                                                                                                                                                  │ cilium-810782                │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-810782                │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-810782                │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-810782                │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ -p cilium-810782 sudo crio config                                                                                                                                                                                                             │ cilium-810782                │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ delete  │ -p cilium-810782                                                                                                                                                                                                                              │ cilium-810782                │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │ 14 Oct 25 21:04 UTC │
	│ start   │ -p force-systemd-env-347955 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-347955     │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ force-systemd-flag-162846 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-162846    │ jenkins │ v1.37.0 │ 14 Oct 25 21:11 UTC │ 14 Oct 25 21:11 UTC │
	│ delete  │ -p force-systemd-flag-162846                                                                                                                                                                                                                  │ force-systemd-flag-162846    │ jenkins │ v1.37.0 │ 14 Oct 25 21:11 UTC │ 14 Oct 25 21:11 UTC │
	│ start   │ -p cert-expiration-341199 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:11 UTC │ 14 Oct 25 21:12 UTC │
	│ delete  │ -p force-systemd-env-347955                                                                                                                                                                                                                   │ force-systemd-env-347955     │ jenkins │ v1.37.0 │ 14 Oct 25 21:12 UTC │ 14 Oct 25 21:12 UTC │
	│ start   │ -p cert-options-967887 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:12 UTC │ 14 Oct 25 21:13 UTC │
	│ ssh     │ cert-options-967887 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ ssh     │ -p cert-options-967887 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ delete  │ -p cert-options-967887                                                                                                                                                                                                                        │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ start   │ -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-624341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │                     │
	│ stop    │ -p old-k8s-version-624341 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-624341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:14 UTC │
	│ start   │ -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:15 UTC │
	│ start   │ -p cert-expiration-341199 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ delete  │ -p cert-expiration-341199                                                                                                                                                                                                                     │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ start   │ -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │                     │
	│ image   │ old-k8s-version-624341 image list --format=json                                                                                                                                                                                               │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ pause   │ -p old-k8s-version-624341 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:15:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:15:48.504000  732597 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:15:48.504115  732597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:15:48.504126  732597 out.go:374] Setting ErrFile to fd 2...
	I1014 21:15:48.504132  732597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:15:48.504735  732597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:15:48.505297  732597 out.go:368] Setting JSON to false
	I1014 21:15:48.506313  732597 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14299,"bootTime":1760462250,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:15:48.506406  732597 start.go:141] virtualization:  
	I1014 21:15:48.512651  732597 out.go:179] * [default-k8s-diff-port-319221] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:15:48.516409  732597 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:15:48.516488  732597 notify.go:220] Checking for updates...
	I1014 21:15:48.523859  732597 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:15:48.527283  732597 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:15:48.530524  732597 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:15:48.533692  732597 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:15:48.536886  732597 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:15:48.540646  732597 config.go:182] Loaded profile config "old-k8s-version-624341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1014 21:15:48.540782  732597 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:15:48.574583  732597 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:15:48.574768  732597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:15:48.639512  732597 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:15:48.629866958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:15:48.639625  732597 docker.go:318] overlay module found
	I1014 21:15:48.642832  732597 out.go:179] * Using the docker driver based on user configuration
	I1014 21:15:48.645831  732597 start.go:305] selected driver: docker
	I1014 21:15:48.645853  732597 start.go:925] validating driver "docker" against <nil>
	I1014 21:15:48.645869  732597 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:15:48.646642  732597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:15:48.709817  732597 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:15:48.700729929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:15:48.709975  732597 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 21:15:48.710211  732597 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:15:48.713239  732597 out.go:179] * Using Docker driver with root privileges
	I1014 21:15:48.716112  732597 cni.go:84] Creating CNI manager for ""
	I1014 21:15:48.716187  732597 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:15:48.716201  732597 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 21:15:48.716289  732597 start.go:349] cluster config:
	{Name:default-k8s-diff-port-319221 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-319221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:15:48.721532  732597 out.go:179] * Starting "default-k8s-diff-port-319221" primary control-plane node in "default-k8s-diff-port-319221" cluster
	I1014 21:15:48.724437  732597 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:15:48.727357  732597 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:15:48.730392  732597 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:15:48.730471  732597 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 21:15:48.730483  732597 cache.go:58] Caching tarball of preloaded images
	I1014 21:15:48.730574  732597 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:15:48.730589  732597 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 21:15:48.730708  732597 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/config.json ...
	I1014 21:15:48.730732  732597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/config.json: {Name:mkd951e8ad03e5af48ae4706453f268ecdb9d715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:15:48.730423  732597 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:15:48.750884  732597 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:15:48.750914  732597 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:15:48.750928  732597 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:15:48.750949  732597 start.go:360] acquireMachinesLock for default-k8s-diff-port-319221: {Name:mkd932431e6f0afcc7b0e6021b1702e8970f2634 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:15:48.751056  732597 start.go:364] duration metric: took 87.419µs to acquireMachinesLock for "default-k8s-diff-port-319221"
	I1014 21:15:48.751087  732597 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-319221 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-319221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:15:48.751158  732597 start.go:125] createHost starting for "" (driver="docker")
	I1014 21:15:48.754590  732597 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 21:15:48.754811  732597 start.go:159] libmachine.API.Create for "default-k8s-diff-port-319221" (driver="docker")
	I1014 21:15:48.754860  732597 client.go:168] LocalClient.Create starting
	I1014 21:15:48.754948  732597 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem
	I1014 21:15:48.754986  732597 main.go:141] libmachine: Decoding PEM data...
	I1014 21:15:48.755006  732597 main.go:141] libmachine: Parsing certificate...
	I1014 21:15:48.755060  732597 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem
	I1014 21:15:48.755086  732597 main.go:141] libmachine: Decoding PEM data...
	I1014 21:15:48.755096  732597 main.go:141] libmachine: Parsing certificate...
	I1014 21:15:48.755447  732597 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-319221 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 21:15:48.772231  732597 cli_runner.go:211] docker network inspect default-k8s-diff-port-319221 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 21:15:48.772367  732597 network_create.go:284] running [docker network inspect default-k8s-diff-port-319221] to gather additional debugging logs...
	I1014 21:15:48.772389  732597 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-319221
	W1014 21:15:48.788203  732597 cli_runner.go:211] docker network inspect default-k8s-diff-port-319221 returned with exit code 1
	I1014 21:15:48.788236  732597 network_create.go:287] error running [docker network inspect default-k8s-diff-port-319221]: docker network inspect default-k8s-diff-port-319221: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-319221 not found
	I1014 21:15:48.788250  732597 network_create.go:289] output of [docker network inspect default-k8s-diff-port-319221]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-319221 not found
	
	** /stderr **
	I1014 21:15:48.788438  732597 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:15:48.804995  732597 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3483fd8ccce IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:16:f0:2b:be:5b} reservation:<nil>}
	I1014 21:15:48.805394  732597 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-92eecc9d5dcd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:46:39:04:de:cc:9d} reservation:<nil>}
	I1014 21:15:48.805753  732597 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6db3c426d67c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:2a:f4:c0:70:fe} reservation:<nil>}
	I1014 21:15:48.806199  732597 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019bbc40}
	I1014 21:15:48.806223  732597 network_create.go:124] attempt to create docker network default-k8s-diff-port-319221 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1014 21:15:48.806286  732597 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-319221 default-k8s-diff-port-319221
	I1014 21:15:48.868676  732597 network_create.go:108] docker network default-k8s-diff-port-319221 192.168.76.0/24 created
	I1014 21:15:48.868705  732597 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-319221" container
	I1014 21:15:48.868796  732597 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 21:15:48.885017  732597 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-319221 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-319221 --label created_by.minikube.sigs.k8s.io=true
	I1014 21:15:48.902862  732597 oci.go:103] Successfully created a docker volume default-k8s-diff-port-319221
	I1014 21:15:48.902965  732597 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-319221-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-319221 --entrypoint /usr/bin/test -v default-k8s-diff-port-319221:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 21:15:49.462841  732597 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-319221
	I1014 21:15:49.462890  732597 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:15:49.462910  732597 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 21:15:49.462989  732597 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-319221:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 21:15:54.412445  732597 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-319221:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.949401605s)
	I1014 21:15:54.412489  732597 kic.go:203] duration metric: took 4.949576573s to extract preloaded images to volume ...
	W1014 21:15:54.412634  732597 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1014 21:15:54.412749  732597 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 21:15:54.476132  732597 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-319221 --name default-k8s-diff-port-319221 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-319221 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-319221 --network default-k8s-diff-port-319221 --ip 192.168.76.2 --volume default-k8s-diff-port-319221:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 21:15:54.775389  732597 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-319221 --format={{.State.Running}}
	I1014 21:15:54.800283  732597 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-319221 --format={{.State.Status}}
	I1014 21:15:54.824589  732597 cli_runner.go:164] Run: docker exec default-k8s-diff-port-319221 stat /var/lib/dpkg/alternatives/iptables
	I1014 21:15:54.877377  732597 oci.go:144] the created container "default-k8s-diff-port-319221" has a running status.
	I1014 21:15:54.877423  732597 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/default-k8s-diff-port-319221/id_rsa...
	I1014 21:15:55.477914  732597 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-545047/.minikube/machines/default-k8s-diff-port-319221/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 21:15:55.497766  732597 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-319221 --format={{.State.Status}}
	I1014 21:15:55.514673  732597 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 21:15:55.514728  732597 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-319221 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 21:15:55.555116  732597 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-319221 --format={{.State.Status}}
	I1014 21:15:55.573004  732597 machine.go:93] provisionDockerMachine start ...
	I1014 21:15:55.573106  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:15:55.590533  732597 main.go:141] libmachine: Using SSH client type: native
	I1014 21:15:55.590889  732597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1014 21:15:55.590899  732597 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:15:55.591469  732597 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37446->127.0.0.1:33792: read: connection reset by peer
	I1014 21:15:58.738210  732597 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-319221
	
	I1014 21:15:58.738234  732597 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-319221"
	I1014 21:15:58.738321  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:15:58.758324  732597 main.go:141] libmachine: Using SSH client type: native
	I1014 21:15:58.758657  732597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1014 21:15:58.758677  732597 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-319221 && echo "default-k8s-diff-port-319221" | sudo tee /etc/hostname
	I1014 21:15:58.914448  732597 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-319221
	
	I1014 21:15:58.914525  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:15:58.932893  732597 main.go:141] libmachine: Using SSH client type: native
	I1014 21:15:58.933213  732597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1014 21:15:58.933238  732597 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-319221' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-319221/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-319221' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:15:59.082627  732597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 21:15:59.082655  732597 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:15:59.082674  732597 ubuntu.go:190] setting up certificates
	I1014 21:15:59.082684  732597 provision.go:84] configureAuth start
	I1014 21:15:59.082745  732597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-319221
	I1014 21:15:59.109422  732597 provision.go:143] copyHostCerts
	I1014 21:15:59.109494  732597 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:15:59.109517  732597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:15:59.109607  732597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:15:59.109700  732597 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:15:59.109711  732597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:15:59.109737  732597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:15:59.109796  732597 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:15:59.109805  732597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:15:59.109834  732597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:15:59.109883  732597 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-319221 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-319221 localhost minikube]
	I1014 21:15:59.378630  732597 provision.go:177] copyRemoteCerts
	I1014 21:15:59.378772  732597 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:15:59.378853  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:15:59.401955  732597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/default-k8s-diff-port-319221/id_rsa Username:docker}
	I1014 21:15:59.525747  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:15:59.554195  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1014 21:15:59.584253  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 21:15:59.608871  732597 provision.go:87] duration metric: took 526.162366ms to configureAuth
	I1014 21:15:59.608894  732597 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:15:59.609079  732597 config.go:182] Loaded profile config "default-k8s-diff-port-319221": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:15:59.609194  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:15:59.630450  732597 main.go:141] libmachine: Using SSH client type: native
	I1014 21:15:59.630763  732597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1014 21:15:59.630783  732597 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:16:00.065874  732597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:16:00.065899  732597 machine.go:96] duration metric: took 4.492874342s to provisionDockerMachine
	I1014 21:16:00.065910  732597 client.go:171] duration metric: took 11.311038088s to LocalClient.Create
	I1014 21:16:00.065924  732597 start.go:167] duration metric: took 11.311115061s to libmachine.API.Create "default-k8s-diff-port-319221"
	I1014 21:16:00.065951  732597 start.go:293] postStartSetup for "default-k8s-diff-port-319221" (driver="docker")
	I1014 21:16:00.065963  732597 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:16:00.066034  732597 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:16:00.066078  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:16:00.162920  732597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/default-k8s-diff-port-319221/id_rsa Username:docker}
	I1014 21:16:00.323638  732597 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:16:00.329138  732597 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:16:00.329172  732597 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:16:00.329185  732597 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:16:00.329254  732597 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:16:00.329361  732597 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:16:00.329506  732597 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:16:00.344697  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:16:00.383437  732597 start.go:296] duration metric: took 317.468308ms for postStartSetup
	I1014 21:16:00.383922  732597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-319221
	I1014 21:16:00.414648  732597 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/config.json ...
	I1014 21:16:00.415043  732597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:16:00.415097  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:16:00.439042  732597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/default-k8s-diff-port-319221/id_rsa Username:docker}
	I1014 21:16:00.550314  732597 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:16:00.555656  732597 start.go:128] duration metric: took 11.804482865s to createHost
	I1014 21:16:00.555684  732597 start.go:83] releasing machines lock for "default-k8s-diff-port-319221", held for 11.804614165s
	I1014 21:16:00.555768  732597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-319221
	I1014 21:16:00.580212  732597 ssh_runner.go:195] Run: cat /version.json
	I1014 21:16:00.580222  732597 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:16:00.580299  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:16:00.580382  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:16:00.604926  732597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/default-k8s-diff-port-319221/id_rsa Username:docker}
	I1014 21:16:00.612698  732597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/default-k8s-diff-port-319221/id_rsa Username:docker}
	I1014 21:16:00.712646  732597 ssh_runner.go:195] Run: systemctl --version
	I1014 21:16:00.824077  732597 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:16:00.884980  732597 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:16:00.890322  732597 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:16:00.890399  732597 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:16:00.929264  732597 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1014 21:16:00.929284  732597 start.go:495] detecting cgroup driver to use...
	I1014 21:16:00.929315  732597 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:16:00.929362  732597 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:16:00.952481  732597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:16:00.967506  732597 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:16:00.967614  732597 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:16:00.984737  732597 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:16:01.009001  732597 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:16:01.135166  732597 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:16:01.277572  732597 docker.go:234] disabling docker service ...
	I1014 21:16:01.277686  732597 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:16:01.325788  732597 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:16:01.349488  732597 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:16:01.513531  732597 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:16:01.669155  732597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:16:01.686973  732597 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:16:01.705975  732597 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:16:01.706032  732597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:01.715366  732597 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:16:01.715435  732597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:01.724955  732597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:01.734386  732597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:01.743386  732597 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:16:01.751872  732597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:01.760996  732597 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:01.776810  732597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
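The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf for this profile. Pieced together from the logged sed expressions (a sketch of the touched keys, not a dump of the actual file), the relevant settings should end up roughly as:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]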
	I1014 21:16:01.786888  732597 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:16:01.798531  732597 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:16:01.810324  732597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:16:01.954007  732597 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 21:16:02.112258  732597 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:16:02.112406  732597 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:16:02.117063  732597 start.go:563] Will wait 60s for crictl version
	I1014 21:16:02.117126  732597 ssh_runner.go:195] Run: which crictl
	I1014 21:16:02.121115  732597 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:16:02.151441  732597 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:16:02.151542  732597 ssh_runner.go:195] Run: crio --version
	I1014 21:16:02.205897  732597 ssh_runner.go:195] Run: crio --version
	I1014 21:16:02.250965  732597 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 21:16:02.253993  732597 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-319221 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:16:02.275769  732597 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1014 21:16:02.279997  732597 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:16:02.291543  732597 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-319221 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-319221 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:16:02.291657  732597 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:16:02.291720  732597 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:16:02.350047  732597 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:16:02.350075  732597 crio.go:433] Images already preloaded, skipping extraction
	I1014 21:16:02.350139  732597 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:16:02.401084  732597 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:16:02.401103  732597 cache_images.go:85] Images are preloaded, skipping loading
	I1014 21:16:02.401110  732597 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1014 21:16:02.401194  732597 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-319221 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-319221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
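The [Unit]/[Service]/[Install] fragment above is the kubelet drop-in that minikube generates; a few lines further down it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside /lib/systemd/system/kubelet.service. If the cluster is still up, one way to confirm what the node actually loaded is to dump the merged unit over SSH (profile name taken from this run; a sketch, not part of the test output):

    # show kubelet.service plus all drop-ins as systemd sees them
    minikube -p default-k8s-diff-port-319221 ssh -- sudo systemctl cat kubelet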
	I1014 21:16:02.401273  732597 ssh_runner.go:195] Run: crio config
	I1014 21:16:02.490691  732597 cni.go:84] Creating CNI manager for ""
	I1014 21:16:02.490764  732597 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:16:02.490799  732597 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 21:16:02.490856  732597 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-319221 NodeName:default-k8s-diff-port-319221 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:16:02.491014  732597 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-319221"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
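The YAML above is the kubeadm configuration minikube renders for this profile; later in this log it is staged as /var/tmp/minikube/kubeadm.yaml.new, copied to /var/tmp/minikube/kubeadm.yaml, and passed to kubeadm init --config. Assuming kubeadm v1.34.1 is available locally and the YAML has been saved to a file (kubeadm.yaml below is a hypothetical filename), a rendered config like this can be checked against the kubeadm API before use:

    # schema-check a saved copy of the rendered config (sketch only)
    kubeadm config validate --config kubeadm.yaml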
	
	I1014 21:16:02.491099  732597 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:16:02.499699  732597 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:16:02.499815  732597 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:16:02.508387  732597 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1014 21:16:02.525079  732597 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:16:02.540757  732597 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1014 21:16:02.555438  732597 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:16:02.559765  732597 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:16:02.571599  732597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:16:02.742357  732597 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:16:02.778287  732597 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221 for IP: 192.168.76.2
	I1014 21:16:02.778318  732597 certs.go:195] generating shared ca certs ...
	I1014 21:16:02.778334  732597 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:02.778502  732597 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:16:02.778569  732597 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:16:02.778582  732597 certs.go:257] generating profile certs ...
	I1014 21:16:02.778651  732597 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/client.key
	I1014 21:16:02.778668  732597 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/client.crt with IP's: []
	I1014 21:16:03.068909  732597 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/client.crt ...
	I1014 21:16:03.068988  732597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/client.crt: {Name:mk6077b127861a1c4075f5345fe326ecc2646b51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:03.069232  732597 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/client.key ...
	I1014 21:16:03.069276  732597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/client.key: {Name:mk401d62be0552e495284e3ce4852f607ee9daa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:03.069416  732597 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/apiserver.key.a60749bb
	I1014 21:16:03.069490  732597 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/apiserver.crt.a60749bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1014 21:16:03.283970  732597 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/apiserver.crt.a60749bb ...
	I1014 21:16:03.284054  732597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/apiserver.crt.a60749bb: {Name:mk4e4c8812e56c8e58f21493d40edfe99879f82e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:03.284257  732597 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/apiserver.key.a60749bb ...
	I1014 21:16:03.284304  732597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/apiserver.key.a60749bb: {Name:mk5d3d339172d25bcda54d613dd7069e077457c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:03.284447  732597 certs.go:382] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/apiserver.crt.a60749bb -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/apiserver.crt
	I1014 21:16:03.284570  732597 certs.go:386] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/apiserver.key.a60749bb -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/apiserver.key
	I1014 21:16:03.284666  732597 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/proxy-client.key
	I1014 21:16:03.284710  732597 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/proxy-client.crt with IP's: []
	I1014 21:16:04.005028  732597 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/proxy-client.crt ...
	I1014 21:16:04.005058  732597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/proxy-client.crt: {Name:mk2c4202ff8ac13f2bbfe16248d1f4bc3a77eda2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:04.005277  732597 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/proxy-client.key ...
	I1014 21:16:04.005294  732597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/proxy-client.key: {Name:mk2f149de64e31319f81851dda6dabbcb09c660a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:04.005487  732597 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:16:04.005529  732597 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:16:04.005539  732597 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:16:04.005563  732597 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:16:04.005589  732597 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:16:04.005613  732597 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:16:04.005666  732597 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:16:04.006312  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:16:04.025764  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:16:04.045283  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:16:04.063500  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:16:04.081394  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 21:16:04.099164  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 21:16:04.117360  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:16:04.138084  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 21:16:04.160121  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:16:04.186223  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:16:04.209168  732597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:16:04.230140  732597 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:16:04.244636  732597 ssh_runner.go:195] Run: openssl version
	I1014 21:16:04.254266  732597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:16:04.264009  732597 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:16:04.268277  732597 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:16:04.268361  732597 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:16:04.337925  732597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 21:16:04.350280  732597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:16:04.367491  732597 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:16:04.381082  732597 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:16:04.381154  732597 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:16:04.447366  732597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 21:16:04.455607  732597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:16:04.463637  732597 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:16:04.467945  732597 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:16:04.468013  732597 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:16:04.509801  732597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:16:04.520016  732597 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:16:04.524343  732597 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 21:16:04.524398  732597 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-319221 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-319221 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:16:04.524470  732597 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:16:04.524525  732597 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:16:04.580936  732597 cri.go:89] found id: ""
	I1014 21:16:04.581046  732597 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:16:04.590835  732597 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 21:16:04.598873  732597 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 21:16:04.598948  732597 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 21:16:04.607527  732597 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 21:16:04.607545  732597 kubeadm.go:157] found existing configuration files:
	
	I1014 21:16:04.607596  732597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1014 21:16:04.615728  732597 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 21:16:04.615803  732597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 21:16:04.624529  732597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1014 21:16:04.632702  732597 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 21:16:04.632772  732597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 21:16:04.644717  732597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1014 21:16:04.658714  732597 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 21:16:04.658789  732597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 21:16:04.670559  732597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1014 21:16:04.680263  732597 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 21:16:04.680346  732597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 21:16:04.688534  732597 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 21:16:04.745068  732597 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 21:16:04.745480  732597 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 21:16:04.787336  732597 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 21:16:04.787421  732597 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1014 21:16:04.787460  732597 kubeadm.go:318] OS: Linux
	I1014 21:16:04.787516  732597 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 21:16:04.787571  732597 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1014 21:16:04.787621  732597 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 21:16:04.787672  732597 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 21:16:04.787723  732597 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 21:16:04.787773  732597 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 21:16:04.787821  732597 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 21:16:04.787871  732597 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 21:16:04.787920  732597 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1014 21:16:04.874179  732597 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 21:16:04.874294  732597 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 21:16:04.874392  732597 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 21:16:04.888255  732597 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Oct 14 21:15:42 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:42.981357223Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=26cd4bfd-c5de-49cf-a5c7-9d0ef50b4c8c name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:15:42 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:42.982901573Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=33ba138f-856c-4b44-be4f-a5c9e38c41df name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:15:42 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:42.983883954Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m/dashboard-metrics-scraper" id=c7fc8031-e133-4a29-b4ca-1b1e5e7529b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:15:42 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:42.984225243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:15:43 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:43.001164838Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:15:43 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:43.005010381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:15:43 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:43.039495683Z" level=info msg="Created container 423ffa133d11035f313b9e89d597c4a8f5fd7b7a297f6259f78408ad42fbc326: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m/dashboard-metrics-scraper" id=c7fc8031-e133-4a29-b4ca-1b1e5e7529b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:15:43 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:43.041358971Z" level=info msg="Starting container: 423ffa133d11035f313b9e89d597c4a8f5fd7b7a297f6259f78408ad42fbc326" id=7f51461e-f76b-487d-b33b-586943b8aaab name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:15:43 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:43.045191173Z" level=info msg="Started container" PID=1637 containerID=423ffa133d11035f313b9e89d597c4a8f5fd7b7a297f6259f78408ad42fbc326 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m/dashboard-metrics-scraper id=7f51461e-f76b-487d-b33b-586943b8aaab name=/runtime.v1.RuntimeService/StartContainer sandboxID=630c140ad9eb72637d5ed49c216f0884b67664458499df281b7a5180e29b95b1
	Oct 14 21:15:43 old-k8s-version-624341 conmon[1635]: conmon 423ffa133d11035f313b <ninfo>: container 1637 exited with status 1
	Oct 14 21:15:43 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:43.223026506Z" level=info msg="Removing container: b3900a852bc7de6cf6eac77094590c29b59a5fe3ca713b107dbd63f8f7f2f5c5" id=3bcd12d8-d8fd-4ad2-a1af-3eff7326d377 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 21:15:43 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:43.2458638Z" level=info msg="Error loading conmon cgroup of container b3900a852bc7de6cf6eac77094590c29b59a5fe3ca713b107dbd63f8f7f2f5c5: cgroup deleted" id=3bcd12d8-d8fd-4ad2-a1af-3eff7326d377 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 21:15:43 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:43.249738603Z" level=info msg="Removed container b3900a852bc7de6cf6eac77094590c29b59a5fe3ca713b107dbd63f8f7f2f5c5: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m/dashboard-metrics-scraper" id=3bcd12d8-d8fd-4ad2-a1af-3eff7326d377 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.974336682Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.981456667Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.981491146Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.981513234Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.987195259Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.987236457Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.987306579Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.993176242Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.993208775Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.993232193Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.996785359Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:15:50 old-k8s-version-624341 crio[648]: time="2025-10-14T21:15:50.99748543Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	423ffa133d110       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago       Exited              dashboard-metrics-scraper   2                   630c140ad9eb7       dashboard-metrics-scraper-5f989dc9cf-v4q8m       kubernetes-dashboard
	ebfa0b922c57a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   e18c804058051       storage-provisioner                              kube-system
	a275fcb41cc41       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago       Running             kubernetes-dashboard        0                   56132504337cc       kubernetes-dashboard-8694d4445c-w9nz4            kubernetes-dashboard
	db04755ef3716       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           55 seconds ago       Running             coredns                     1                   4a07b9e7c30db       coredns-5dd5756b68-cqt59                         kube-system
	ba3691b44f3f8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   e18c804058051       storage-provisioner                              kube-system
	613c6d12dd023       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   bd27f6c47d608       kindnet-5ffrd                                    kube-system
	2fd57fc137972       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   0e437e893efef       busybox                                          default
	97affc6ef505a       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           55 seconds ago       Running             kube-proxy                  1                   9bc1e4a25f512       kube-proxy-v2k8v                                 kube-system
	04640afe232d4       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   5f38cb6417a78       kube-scheduler-old-k8s-version-624341            kube-system
	1947fbb61b56d       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   c6fe17cad36a6       kube-controller-manager-old-k8s-version-624341   kube-system
	d0db4c4eeb0f4       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   754d7fb41e136       etcd-old-k8s-version-624341                      kube-system
	f31f4eccb8ff5       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   625520acd1c69       kube-apiserver-old-k8s-version-624341            kube-system
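The container status table above reflects CRI-O's view of the old-k8s-version-624341 node, essentially what crictl reports when asked for all containers including exited ones. A hedged way to reproduce it while that node is still running:

    # list running and exited containers via CRI-O (sketch, assumes the profile is still up)
    minikube -p old-k8s-version-624341 ssh -- sudo crictl ps -a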
	
	
	==> coredns [db04755ef37168b8f476efca5050b4572e1eeb8a4cc86296f34a701cad16040c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50856 - 8005 "HINFO IN 8160849112293248488.1898350068210291602. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014063927s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-624341
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-624341
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=old-k8s-version-624341
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T21_14_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 21:13:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-624341
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 21:15:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 21:15:39 +0000   Tue, 14 Oct 2025 21:13:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 21:15:39 +0000   Tue, 14 Oct 2025 21:13:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 21:15:39 +0000   Tue, 14 Oct 2025 21:13:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 21:15:39 +0000   Tue, 14 Oct 2025 21:14:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-624341
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a9ddd38fe3d4f3bae60dc17786ee49a
	  System UUID:                32ff5aeb-1d61-4167-8fd8-805f888e4635
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-cqt59                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-old-k8s-version-624341                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m4s
	  kube-system                 kindnet-5ffrd                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-624341             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-624341    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-v2k8v                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-624341             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-v4q8m        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-w9nz4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m4s               kubelet          Node old-k8s-version-624341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s               kubelet          Node old-k8s-version-624341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s               kubelet          Node old-k8s-version-624341 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node old-k8s-version-624341 event: Registered Node old-k8s-version-624341 in Controller
	  Normal  NodeReady                97s                kubelet          Node old-k8s-version-624341 status is now: NodeReady
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 63s)  kubelet          Node old-k8s-version-624341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 63s)  kubelet          Node old-k8s-version-624341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 63s)  kubelet          Node old-k8s-version-624341 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                node-controller  Node old-k8s-version-624341 event: Registered Node old-k8s-version-624341 in Controller
	
	
	==> dmesg <==
	[Oct14 20:38] overlayfs: idmapped layers are currently not supported
	[Oct14 20:43] overlayfs: idmapped layers are currently not supported
	[ +36.710003] overlayfs: idmapped layers are currently not supported
	[Oct14 20:45] overlayfs: idmapped layers are currently not supported
	[Oct14 20:46] overlayfs: idmapped layers are currently not supported
	[Oct14 20:47] overlayfs: idmapped layers are currently not supported
	[Oct14 20:48] overlayfs: idmapped layers are currently not supported
	[Oct14 20:49] overlayfs: idmapped layers are currently not supported
	[ +15.861847] overlayfs: idmapped layers are currently not supported
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	[Oct14 21:03] overlayfs: idmapped layers are currently not supported
	[Oct14 21:04] overlayfs: idmapped layers are currently not supported
	[Oct14 21:12] overlayfs: idmapped layers are currently not supported
	[Oct14 21:13] overlayfs: idmapped layers are currently not supported
	[ +32.288963] overlayfs: idmapped layers are currently not supported
	[Oct14 21:15] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d0db4c4eeb0f4f4601df4e42968b81010479fec3afca66413f91feb6bdfcc5d4] <==
	{"level":"info","ts":"2025-10-14T21:15:05.089091Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-14T21:15:05.089099Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-14T21:15:05.089318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-14T21:15:05.08937Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-14T21:15:05.089465Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-14T21:15:05.089493Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-14T21:15:05.117341Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-14T21:15:05.121664Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-14T21:15:05.121694Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-14T21:15:05.12135Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-14T21:15:05.121728Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-14T21:15:06.313054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-14T21:15:06.313103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-14T21:15:06.313132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-14T21:15:06.313147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-14T21:15:06.313164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-14T21:15:06.313175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-14T21:15:06.313184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-14T21:15:06.320736Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-624341 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-14T21:15:06.322183Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-14T21:15:06.323204Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-14T21:15:06.325244Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-14T21:15:06.326153Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-14T21:15:06.326336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-14T21:15:06.326379Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 21:16:06 up  3:58,  0 user,  load average: 3.11, 1.88, 1.72
	Linux old-k8s-version-624341 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [613c6d12dd0234b47511029fb2b5f65d7ad28eea15a421aca55df8b32d0742d9] <==
	I1014 21:15:10.687660       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:15:10.780184       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1014 21:15:10.780370       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:15:10.780384       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:15:10.780402       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:15:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:15:10.973550       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:15:10.973576       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:15:10.973584       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:15:10.974471       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1014 21:15:40.974052       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1014 21:15:40.974272       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1014 21:15:40.974361       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1014 21:15:40.974448       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1014 21:15:42.274293       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1014 21:15:42.274335       1 metrics.go:72] Registering metrics
	I1014 21:15:42.274410       1 controller.go:711] "Syncing nftables rules"
	I1014 21:15:50.974037       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1014 21:15:50.974092       1 main.go:301] handling current node
	I1014 21:16:00.973479       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1014 21:16:00.973510       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f31f4eccb8ff5b77a90ad8c8dce9d271a1f4385ba9215f4a0d2b961573fd7aeb] <==
	I1014 21:15:09.016870       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 21:15:09.046639       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1014 21:15:09.054615       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1014 21:15:09.061407       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1014 21:15:09.062730       1 shared_informer.go:318] Caches are synced for configmaps
	I1014 21:15:09.062911       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1014 21:15:09.062927       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1014 21:15:09.065304       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 21:15:09.084706       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1014 21:15:09.084895       1 aggregator.go:166] initial CRD sync complete...
	I1014 21:15:09.086637       1 autoregister_controller.go:141] Starting autoregister controller
	I1014 21:15:09.086654       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 21:15:09.086662       1 cache.go:39] Caches are synced for autoregister controller
	E1014 21:15:09.166049       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1014 21:15:09.655323       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 21:15:11.262589       1 controller.go:624] quota admission added evaluator for: namespaces
	I1014 21:15:11.322320       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1014 21:15:11.358245       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 21:15:11.375049       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 21:15:11.385401       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1014 21:15:11.445120       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.100.114"}
	I1014 21:15:11.466262       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.218.251"}
	I1014 21:15:21.507911       1 controller.go:624] quota admission added evaluator for: endpoints
	I1014 21:15:21.671015       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 21:15:21.718940       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [1947fbb61b56d1cf4961829e7eec2bc21178da55f70176cf65ede34915696c40] <==
	I1014 21:15:21.660459       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1014 21:15:21.724499       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1014 21:15:21.728876       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1014 21:15:21.743201       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-w9nz4"
	I1014 21:15:21.743327       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-v4q8m"
	I1014 21:15:21.775152       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.591679ms"
	I1014 21:15:21.779768       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="50.460255ms"
	I1014 21:15:21.795673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="18.499428ms"
	I1014 21:15:21.795808       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.163µs"
	I1014 21:15:21.805942       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="26.056466ms"
	I1014 21:15:21.806243       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.315µs"
	I1014 21:15:21.806437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="31.557µs"
	I1014 21:15:21.821510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="54.655µs"
	I1014 21:15:22.069411       1 shared_informer.go:318] Caches are synced for garbage collector
	I1014 21:15:22.080617       1 shared_informer.go:318] Caches are synced for garbage collector
	I1014 21:15:22.080647       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1014 21:15:27.177792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="134.049µs"
	I1014 21:15:28.189558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="48.354µs"
	I1014 21:15:29.181825       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.253µs"
	I1014 21:15:32.217602       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.721818ms"
	I1014 21:15:32.218435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="602.158µs"
	I1014 21:15:43.232951       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.904µs"
	I1014 21:15:45.788387       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="28.690267ms"
	I1014 21:15:45.789345       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.433µs"
	I1014 21:15:52.112470       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.154µs"
	
	
	==> kube-proxy [97affc6ef505a0056c2725bf977aae927fc4444687b58e22069669f20bca096d] <==
	I1014 21:15:10.766398       1 server_others.go:69] "Using iptables proxy"
	I1014 21:15:10.787140       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1014 21:15:10.858618       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:15:10.863320       1 server_others.go:152] "Using iptables Proxier"
	I1014 21:15:10.863359       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1014 21:15:10.863372       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1014 21:15:10.863403       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1014 21:15:10.863642       1 server.go:846] "Version info" version="v1.28.0"
	I1014 21:15:10.863653       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:15:10.891903       1 config.go:188] "Starting service config controller"
	I1014 21:15:10.891924       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1014 21:15:10.891945       1 config.go:97] "Starting endpoint slice config controller"
	I1014 21:15:10.891948       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1014 21:15:10.891961       1 config.go:315] "Starting node config controller"
	I1014 21:15:10.891965       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1014 21:15:10.993560       1 shared_informer.go:318] Caches are synced for service config
	I1014 21:15:10.998172       1 shared_informer.go:318] Caches are synced for node config
	I1014 21:15:10.998200       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [04640afe232d4e2d06f25f581c82ecc2b6b9a7fa0721a54724213eab98c897f9] <==
	I1014 21:15:07.746196       1 serving.go:348] Generated self-signed cert in-memory
	I1014 21:15:09.843556       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1014 21:15:09.843591       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:15:09.863273       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1014 21:15:09.863308       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1014 21:15:09.863340       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:15:09.863350       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 21:15:09.863369       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 21:15:09.863374       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1014 21:15:09.864284       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1014 21:15:09.866629       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1014 21:15:09.964162       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1014 21:15:09.964220       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1014 21:15:09.964307       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 21:15:21 old-k8s-version-624341 kubelet[777]: I1014 21:15:21.858688     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a3da1720-834f-433f-b807-e33a8df959f4-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-v4q8m\" (UID: \"a3da1720-834f-433f-b807-e33a8df959f4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m"
	Oct 14 21:15:21 old-k8s-version-624341 kubelet[777]: I1014 21:15:21.858746     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0a5d607f-d276-4275-904a-eaad8181d089-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-w9nz4\" (UID: \"0a5d607f-d276-4275-904a-eaad8181d089\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-w9nz4"
	Oct 14 21:15:21 old-k8s-version-624341 kubelet[777]: I1014 21:15:21.858780     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc5ck\" (UniqueName: \"kubernetes.io/projected/a3da1720-834f-433f-b807-e33a8df959f4-kube-api-access-xc5ck\") pod \"dashboard-metrics-scraper-5f989dc9cf-v4q8m\" (UID: \"a3da1720-834f-433f-b807-e33a8df959f4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m"
	Oct 14 21:15:21 old-k8s-version-624341 kubelet[777]: I1014 21:15:21.858807     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r75wg\" (UniqueName: \"kubernetes.io/projected/0a5d607f-d276-4275-904a-eaad8181d089-kube-api-access-r75wg\") pod \"kubernetes-dashboard-8694d4445c-w9nz4\" (UID: \"0a5d607f-d276-4275-904a-eaad8181d089\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-w9nz4"
	Oct 14 21:15:22 old-k8s-version-624341 kubelet[777]: W1014 21:15:22.098112     777 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e/crio-630c140ad9eb72637d5ed49c216f0884b67664458499df281b7a5180e29b95b1 WatchSource:0}: Error finding container 630c140ad9eb72637d5ed49c216f0884b67664458499df281b7a5180e29b95b1: Status 404 returned error can't find the container with id 630c140ad9eb72637d5ed49c216f0884b67664458499df281b7a5180e29b95b1
	Oct 14 21:15:22 old-k8s-version-624341 kubelet[777]: W1014 21:15:22.115758     777 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/15ab9b4ad277d5deec5c7ce3344f33664be7d4af7cc3e007fb7082df1146d74e/crio-56132504337cce2b3db587ecdc291729c2845b1ee9d8ce85ceadd1b89938dfeb WatchSource:0}: Error finding container 56132504337cce2b3db587ecdc291729c2845b1ee9d8ce85ceadd1b89938dfeb: Status 404 returned error can't find the container with id 56132504337cce2b3db587ecdc291729c2845b1ee9d8ce85ceadd1b89938dfeb
	Oct 14 21:15:27 old-k8s-version-624341 kubelet[777]: I1014 21:15:27.155545     777 scope.go:117] "RemoveContainer" containerID="aecfc229626c355b888669c3fc61d117400e88d532edf9cd2c1e0af3b48a16f7"
	Oct 14 21:15:28 old-k8s-version-624341 kubelet[777]: I1014 21:15:28.159203     777 scope.go:117] "RemoveContainer" containerID="aecfc229626c355b888669c3fc61d117400e88d532edf9cd2c1e0af3b48a16f7"
	Oct 14 21:15:28 old-k8s-version-624341 kubelet[777]: I1014 21:15:28.159487     777 scope.go:117] "RemoveContainer" containerID="b3900a852bc7de6cf6eac77094590c29b59a5fe3ca713b107dbd63f8f7f2f5c5"
	Oct 14 21:15:28 old-k8s-version-624341 kubelet[777]: E1014 21:15:28.159745     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-v4q8m_kubernetes-dashboard(a3da1720-834f-433f-b807-e33a8df959f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m" podUID="a3da1720-834f-433f-b807-e33a8df959f4"
	Oct 14 21:15:29 old-k8s-version-624341 kubelet[777]: I1014 21:15:29.163249     777 scope.go:117] "RemoveContainer" containerID="b3900a852bc7de6cf6eac77094590c29b59a5fe3ca713b107dbd63f8f7f2f5c5"
	Oct 14 21:15:29 old-k8s-version-624341 kubelet[777]: E1014 21:15:29.163522     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-v4q8m_kubernetes-dashboard(a3da1720-834f-433f-b807-e33a8df959f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m" podUID="a3da1720-834f-433f-b807-e33a8df959f4"
	Oct 14 21:15:32 old-k8s-version-624341 kubelet[777]: I1014 21:15:32.070550     777 scope.go:117] "RemoveContainer" containerID="b3900a852bc7de6cf6eac77094590c29b59a5fe3ca713b107dbd63f8f7f2f5c5"
	Oct 14 21:15:32 old-k8s-version-624341 kubelet[777]: E1014 21:15:32.070947     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-v4q8m_kubernetes-dashboard(a3da1720-834f-433f-b807-e33a8df959f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m" podUID="a3da1720-834f-433f-b807-e33a8df959f4"
	Oct 14 21:15:41 old-k8s-version-624341 kubelet[777]: I1014 21:15:41.191484     777 scope.go:117] "RemoveContainer" containerID="ba3691b44f3f8a89648561b1d482df4ea2019f75e96a10bbe93983037c866a5d"
	Oct 14 21:15:41 old-k8s-version-624341 kubelet[777]: I1014 21:15:41.235102     777 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-w9nz4" podStartSLOduration=10.48612101 podCreationTimestamp="2025-10-14 21:15:21 +0000 UTC" firstStartedPulling="2025-10-14 21:15:22.119628415 +0000 UTC m=+18.369573290" lastFinishedPulling="2025-10-14 21:15:31.868548768 +0000 UTC m=+28.118493651" observedRunningTime="2025-10-14 21:15:32.196059144 +0000 UTC m=+28.446004019" watchObservedRunningTime="2025-10-14 21:15:41.235041371 +0000 UTC m=+37.484986246"
	Oct 14 21:15:42 old-k8s-version-624341 kubelet[777]: I1014 21:15:42.980682     777 scope.go:117] "RemoveContainer" containerID="b3900a852bc7de6cf6eac77094590c29b59a5fe3ca713b107dbd63f8f7f2f5c5"
	Oct 14 21:15:43 old-k8s-version-624341 kubelet[777]: I1014 21:15:43.200786     777 scope.go:117] "RemoveContainer" containerID="b3900a852bc7de6cf6eac77094590c29b59a5fe3ca713b107dbd63f8f7f2f5c5"
	Oct 14 21:15:43 old-k8s-version-624341 kubelet[777]: I1014 21:15:43.201150     777 scope.go:117] "RemoveContainer" containerID="423ffa133d11035f313b9e89d597c4a8f5fd7b7a297f6259f78408ad42fbc326"
	Oct 14 21:15:43 old-k8s-version-624341 kubelet[777]: E1014 21:15:43.201477     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-v4q8m_kubernetes-dashboard(a3da1720-834f-433f-b807-e33a8df959f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m" podUID="a3da1720-834f-433f-b807-e33a8df959f4"
	Oct 14 21:15:52 old-k8s-version-624341 kubelet[777]: I1014 21:15:52.070310     777 scope.go:117] "RemoveContainer" containerID="423ffa133d11035f313b9e89d597c4a8f5fd7b7a297f6259f78408ad42fbc326"
	Oct 14 21:15:52 old-k8s-version-624341 kubelet[777]: E1014 21:15:52.070658     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-v4q8m_kubernetes-dashboard(a3da1720-834f-433f-b807-e33a8df959f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-v4q8m" podUID="a3da1720-834f-433f-b807-e33a8df959f4"
	Oct 14 21:16:00 old-k8s-version-624341 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 14 21:16:00 old-k8s-version-624341 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 14 21:16:00 old-k8s-version-624341 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [a275fcb41cc41d4f028e2ada689b325a86925c16d9ddca4963aad91b50607886] <==
	2025/10/14 21:15:31 Using namespace: kubernetes-dashboard
	2025/10/14 21:15:31 Using in-cluster config to connect to apiserver
	2025/10/14 21:15:31 Using secret token for csrf signing
	2025/10/14 21:15:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/14 21:15:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/14 21:15:31 Successful initial request to the apiserver, version: v1.28.0
	2025/10/14 21:15:31 Generating JWE encryption key
	2025/10/14 21:15:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/14 21:15:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/14 21:15:32 Initializing JWE encryption key from synchronized object
	2025/10/14 21:15:32 Creating in-cluster Sidecar client
	2025/10/14 21:15:32 Serving insecurely on HTTP port: 9090
	2025/10/14 21:15:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/14 21:16:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/14 21:15:31 Starting overwatch
	
	
	==> storage-provisioner [ba3691b44f3f8a89648561b1d482df4ea2019f75e96a10bbe93983037c866a5d] <==
	I1014 21:15:10.714727       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1014 21:15:40.719240       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ebfa0b922c57afc3fdaeb2803f737efb84c1994638ec9fd37a8a5b060474e3c1] <==
	I1014 21:15:41.285774       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 21:15:41.309885       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 21:15:41.316400       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 21:15:58.728622       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 21:15:58.728816       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-624341_6ff066b9-1a1b-46cb-93da-00aec88f7b4c!
	I1014 21:15:58.729851       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e06cdbc4-6782-498a-91de-d8fc4443f1e8", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-624341_6ff066b9-1a1b-46cb-93da-00aec88f7b4c became leader
	I1014 21:15:58.829708       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-624341_6ff066b9-1a1b-46cb-93da-00aec88f7b4c!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-624341 -n old-k8s-version-624341
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-624341 -n old-k8s-version-624341: exit status 2 (482.325671ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-624341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.80s)
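The kindnet and storage-provisioner logs above both report `dial tcp 10.96.0.1:443: i/o timeout` against the kubernetes Service VIP while the control plane was restarting. A minimal sketch for probing that VIP from inside the cluster, assuming the profile is still running and that a throwaway `curlimages/curl` pod is acceptable (both assumptions; not part of the captured run):

	# throwaway pod that curls the in-cluster apiserver VIP, removed after it exits
	kubectl --context old-k8s-version-624341 run vip-probe --rm -i --restart=Never --image=curlimages/curl -- curl -sk https://10.96.0.1:443/version

If the VIP is reachable this prints the apiserver version JSON; a timeout here would point at the service-proxy path (kube-proxy/iptables) rather than the apiserver itself.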

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-319221 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-319221 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (274.659572ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:17:22Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-319221 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
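The stderr above shows the immediate cause of the exit 11: the "check paused" step shells out to `sudo runc list -f json`, which fails with `open /run/runc: no such file or directory` on this CRI-O node. A sketch for reproducing that check by hand over SSH (the runc state directory used by CRI-O may live elsewhere, e.g. under /run/crio; the paths are assumptions to probe, not facts from this run):

	out/minikube-linux-arm64 -p default-k8s-diff-port-319221 ssh -- sudo runc list -f json
	out/minikube-linux-arm64 -p default-k8s-diff-port-319221 ssh -- sudo ls -la /run/runc

Comparing which directory actually holds container state would show whether the failure is a missing state directory or simply a different runc root than the one being queried.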
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-319221 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-319221 describe deploy/metrics-server -n kube-system: exit status 1 (85.60647ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-319221 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
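The assertion expects the metrics-server Deployment to carry the overridden registry, i.e. an image string containing `fake.domain/registry.k8s.io/echoserver:1.4`; the describe call above returned nothing because the deployment was never created (NotFound). When the addon does deploy, one direct way to read just the image field (a plain kubectl sketch, not part of the test harness) is:

	kubectl --context default-k8s-diff-port-319221 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

which should print `fake.domain/registry.k8s.io/echoserver:1.4` if the `--images`/`--registries` overrides were applied.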
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-319221
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-319221:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3",
	        "Created": "2025-10-14T21:15:54.491462539Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 733025,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:15:54.545781351Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/hosts",
	        "LogPath": "/var/lib/docker/containers/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3-json.log",
	        "Name": "/default-k8s-diff-port-319221",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-319221:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-319221",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3",
	                "LowerDir": "/var/lib/docker/overlay2/882da2e1fffe67015b2504b67fde7e27d0bff25149ca31ed071d01511fda8d09-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/882da2e1fffe67015b2504b67fde7e27d0bff25149ca31ed071d01511fda8d09/merged",
	                "UpperDir": "/var/lib/docker/overlay2/882da2e1fffe67015b2504b67fde7e27d0bff25149ca31ed071d01511fda8d09/diff",
	                "WorkDir": "/var/lib/docker/overlay2/882da2e1fffe67015b2504b67fde7e27d0bff25149ca31ed071d01511fda8d09/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-319221",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-319221/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-319221",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-319221",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-319221",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bff1556e2cf913946b90de9a3a2abf266e4f39b6e3541683d522d78ba911f745",
	            "SandboxKey": "/var/run/docker/netns/bff1556e2cf9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33793"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33796"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33794"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33795"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-319221": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:67:60:cb:bd:6a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a784e56ec6e33ab3395b1054e1c771fec07076ba24af08b06eab32f8be7d3fc1",
	                    "EndpointID": "796d22f5d0aeb48af4272e620e78b923ea97e6cf6bc40bde65bb7a8894ed865b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-319221",
	                        "09b53eb1fb5a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-319221 -n default-k8s-diff-port-319221
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-319221 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-319221 logs -n 25: (1.220508792s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-810782 sudo crio config                                                                                                                                                                                                             │ cilium-810782                │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ delete  │ -p cilium-810782                                                                                                                                                                                                                              │ cilium-810782                │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │ 14 Oct 25 21:04 UTC │
	│ start   │ -p force-systemd-env-347955 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-347955     │ jenkins │ v1.37.0 │ 14 Oct 25 21:04 UTC │                     │
	│ ssh     │ force-systemd-flag-162846 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-162846    │ jenkins │ v1.37.0 │ 14 Oct 25 21:11 UTC │ 14 Oct 25 21:11 UTC │
	│ delete  │ -p force-systemd-flag-162846                                                                                                                                                                                                                  │ force-systemd-flag-162846    │ jenkins │ v1.37.0 │ 14 Oct 25 21:11 UTC │ 14 Oct 25 21:11 UTC │
	│ start   │ -p cert-expiration-341199 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:11 UTC │ 14 Oct 25 21:12 UTC │
	│ delete  │ -p force-systemd-env-347955                                                                                                                                                                                                                   │ force-systemd-env-347955     │ jenkins │ v1.37.0 │ 14 Oct 25 21:12 UTC │ 14 Oct 25 21:12 UTC │
	│ start   │ -p cert-options-967887 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:12 UTC │ 14 Oct 25 21:13 UTC │
	│ ssh     │ cert-options-967887 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ ssh     │ -p cert-options-967887 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ delete  │ -p cert-options-967887                                                                                                                                                                                                                        │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ start   │ -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-624341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │                     │
	│ stop    │ -p old-k8s-version-624341 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-624341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:14 UTC │
	│ start   │ -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:15 UTC │
	│ start   │ -p cert-expiration-341199 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ delete  │ -p cert-expiration-341199                                                                                                                                                                                                                     │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ start   │ -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:17 UTC │
	│ image   │ old-k8s-version-624341 image list --format=json                                                                                                                                                                                               │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ pause   │ -p old-k8s-version-624341 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │                     │
	│ delete  │ -p old-k8s-version-624341                                                                                                                                                                                                                     │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:16 UTC │
	│ delete  │ -p old-k8s-version-624341                                                                                                                                                                                                                     │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:16 UTC │
	│ start   │ -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-319221 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:16:10
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:16:10.380651  735865 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:16:10.380764  735865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:16:10.380769  735865 out.go:374] Setting ErrFile to fd 2...
	I1014 21:16:10.380774  735865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:16:10.381017  735865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:16:10.381465  735865 out.go:368] Setting JSON to false
	I1014 21:16:10.382301  735865 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14321,"bootTime":1760462250,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:16:10.382358  735865 start.go:141] virtualization:  
	I1014 21:16:10.386394  735865 out.go:179] * [embed-certs-650951] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:16:10.390818  735865 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:16:10.391043  735865 notify.go:220] Checking for updates...
	I1014 21:16:10.397262  735865 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:16:10.400377  735865 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:16:10.403463  735865 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:16:10.406495  735865 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:16:10.409529  735865 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:16:10.412975  735865 config.go:182] Loaded profile config "default-k8s-diff-port-319221": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:16:10.413153  735865 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:16:10.456665  735865 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:16:10.456793  735865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:16:10.552868  735865 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-14 21:16:10.538202084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:16:10.552973  735865 docker.go:318] overlay module found
	I1014 21:16:10.556165  735865 out.go:179] * Using the docker driver based on user configuration
	I1014 21:16:10.559008  735865 start.go:305] selected driver: docker
	I1014 21:16:10.559024  735865 start.go:925] validating driver "docker" against <nil>
	I1014 21:16:10.559037  735865 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:16:10.559737  735865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:16:10.663386  735865 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-14 21:16:10.65181769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:16:10.663590  735865 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 21:16:10.663908  735865 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:16:10.667396  735865 out.go:179] * Using Docker driver with root privileges
	I1014 21:16:10.670353  735865 cni.go:84] Creating CNI manager for ""
	I1014 21:16:10.670435  735865 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:16:10.670445  735865 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 21:16:10.670542  735865 start.go:349] cluster config:
	{Name:embed-certs-650951 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-650951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:16:10.673641  735865 out.go:179] * Starting "embed-certs-650951" primary control-plane node in "embed-certs-650951" cluster
	I1014 21:16:10.676499  735865 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:16:10.679510  735865 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:16:10.682420  735865 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:16:10.682514  735865 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 21:16:10.682563  735865 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:16:10.682800  735865 cache.go:58] Caching tarball of preloaded images
	I1014 21:16:10.682937  735865 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:16:10.682987  735865 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 21:16:10.683130  735865 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/config.json ...
	I1014 21:16:10.683175  735865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/config.json: {Name:mk420bed12f6aa3e5bdf38a581aed9fc75051bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:10.709583  735865 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:16:10.709611  735865 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:16:10.709625  735865 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:16:10.709654  735865 start.go:360] acquireMachinesLock for embed-certs-650951: {Name:mkc5e4e33d326b637da386142871e636ddbb8a63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:16:10.709761  735865 start.go:364] duration metric: took 86.475µs to acquireMachinesLock for "embed-certs-650951"
	I1014 21:16:10.709787  735865 start.go:93] Provisioning new machine with config: &{Name:embed-certs-650951 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-650951 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:16:10.709864  735865 start.go:125] createHost starting for "" (driver="docker")
	I1014 21:16:08.721358  732597 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 21:16:08.721518  732597 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-319221 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1014 21:16:09.078615  732597 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 21:16:10.016835  732597 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 21:16:10.664144  732597 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 21:16:10.664646  732597 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 21:16:11.271428  732597 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 21:16:11.846955  732597 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 21:16:12.444518  732597 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 21:16:12.662193  732597 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 21:16:13.024633  732597 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 21:16:13.024745  732597 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 21:16:13.024976  732597 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 21:16:13.030840  732597 out.go:252]   - Booting up control plane ...
	I1014 21:16:13.030958  732597 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 21:16:13.031047  732597 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 21:16:13.031121  732597 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 21:16:13.054805  732597 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 21:16:13.054921  732597 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 21:16:13.063852  732597 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 21:16:13.064046  732597 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 21:16:13.064103  732597 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 21:16:13.227164  732597 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 21:16:13.227297  732597 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
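Note: the kubelet health endpoint kubeadm polls here (http://127.0.0.1:10248/healthz) can also be probed by hand from inside the node, for example over minikube ssh; this is only an illustrative check, not something the test run performs:

    minikube -p default-k8s-diff-port-319221 ssh -- curl -sf http://127.0.0.1:10248/healthz
    # prints "ok" once the kubelet is healthy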
	I1014 21:16:10.713308  735865 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 21:16:10.713598  735865 start.go:159] libmachine.API.Create for "embed-certs-650951" (driver="docker")
	I1014 21:16:10.713637  735865 client.go:168] LocalClient.Create starting
	I1014 21:16:10.713726  735865 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem
	I1014 21:16:10.713766  735865 main.go:141] libmachine: Decoding PEM data...
	I1014 21:16:10.713781  735865 main.go:141] libmachine: Parsing certificate...
	I1014 21:16:10.713840  735865 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem
	I1014 21:16:10.713859  735865 main.go:141] libmachine: Decoding PEM data...
	I1014 21:16:10.713868  735865 main.go:141] libmachine: Parsing certificate...
	I1014 21:16:10.714233  735865 cli_runner.go:164] Run: docker network inspect embed-certs-650951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 21:16:10.737914  735865 cli_runner.go:211] docker network inspect embed-certs-650951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 21:16:10.737994  735865 network_create.go:284] running [docker network inspect embed-certs-650951] to gather additional debugging logs...
	I1014 21:16:10.738011  735865 cli_runner.go:164] Run: docker network inspect embed-certs-650951
	W1014 21:16:10.758123  735865 cli_runner.go:211] docker network inspect embed-certs-650951 returned with exit code 1
	I1014 21:16:10.758161  735865 network_create.go:287] error running [docker network inspect embed-certs-650951]: docker network inspect embed-certs-650951: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-650951 not found
	I1014 21:16:10.758176  735865 network_create.go:289] output of [docker network inspect embed-certs-650951]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-650951 not found
	
	** /stderr **
	I1014 21:16:10.758297  735865 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:16:10.778792  735865 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3483fd8ccce IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:16:f0:2b:be:5b} reservation:<nil>}
	I1014 21:16:10.779192  735865 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-92eecc9d5dcd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:46:39:04:de:cc:9d} reservation:<nil>}
	I1014 21:16:10.779571  735865 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6db3c426d67c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:2a:f4:c0:70:fe} reservation:<nil>}
	I1014 21:16:10.779888  735865 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a784e56ec6e3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e6:b1:87:64:45:8e} reservation:<nil>}
	I1014 21:16:10.780513  735865 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0ad50}
	I1014 21:16:10.780563  735865 network_create.go:124] attempt to create docker network embed-certs-650951 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1014 21:16:10.780678  735865 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-650951 embed-certs-650951
	I1014 21:16:10.849732  735865 network_create.go:108] docker network embed-certs-650951 192.168.85.0/24 created
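Note: the subnet and gateway minikube selected for this network (192.168.85.0/24 and 192.168.85.1, after skipping the subnets already in use above) can be confirmed directly with the docker CLI; shown only as an illustrative check:

    docker network inspect embed-certs-650951 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # expected per the log above: 192.168.85.0/24 192.168.85.1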
	I1014 21:16:10.849761  735865 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-650951" container
	I1014 21:16:10.849832  735865 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 21:16:10.866085  735865 cli_runner.go:164] Run: docker volume create embed-certs-650951 --label name.minikube.sigs.k8s.io=embed-certs-650951 --label created_by.minikube.sigs.k8s.io=true
	I1014 21:16:10.889820  735865 oci.go:103] Successfully created a docker volume embed-certs-650951
	I1014 21:16:10.889904  735865 cli_runner.go:164] Run: docker run --rm --name embed-certs-650951-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-650951 --entrypoint /usr/bin/test -v embed-certs-650951:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 21:16:11.541698  735865 oci.go:107] Successfully prepared a docker volume embed-certs-650951
	I1014 21:16:11.541748  735865 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:16:11.541768  735865 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 21:16:11.541846  735865 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-650951:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 21:16:14.728875  732597 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501322421s
	I1014 21:16:14.732110  732597 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 21:16:14.732339  732597 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1014 21:16:14.732598  732597 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 21:16:14.733397  732597 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
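Note: the three control-plane health URLs kubeadm is checking here can also be hit manually from inside the node (the endpoints serve self-signed certificates, hence -k); an illustrative check only:

    curl -ks https://192.168.76.2:8444/livez      # kube-apiserver
    curl -ks https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -ks https://127.0.0.1:10259/livez        # kube-scheduler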
	I1014 21:16:16.541213  735865 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-650951:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.999329451s)
	I1014 21:16:16.541251  735865 kic.go:203] duration metric: took 4.999480624s to extract preloaded images to volume ...
	W1014 21:16:16.541384  735865 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1014 21:16:16.541492  735865 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 21:16:16.640337  735865 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-650951 --name embed-certs-650951 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-650951 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-650951 --network embed-certs-650951 --ip 192.168.85.2 --volume embed-certs-650951:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 21:16:17.061459  735865 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Running}}
	I1014 21:16:17.092608  735865 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:16:17.122038  735865 cli_runner.go:164] Run: docker exec embed-certs-650951 stat /var/lib/dpkg/alternatives/iptables
	I1014 21:16:17.182429  735865 oci.go:144] the created container "embed-certs-650951" has a running status.
	I1014 21:16:17.182459  735865 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa...
	I1014 21:16:17.437513  735865 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 21:16:17.475678  735865 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:16:17.503525  735865 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 21:16:17.503544  735865 kic_runner.go:114] Args: [docker exec --privileged embed-certs-650951 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 21:16:17.569767  735865 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:16:17.598830  735865 machine.go:93] provisionDockerMachine start ...
	I1014 21:16:17.598920  735865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:16:17.631304  735865 main.go:141] libmachine: Using SSH client type: native
	I1014 21:16:17.631656  735865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1014 21:16:17.631667  735865 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:16:17.632421  735865 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57844->127.0.0.1:33797: read: connection reset by peer
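Note: this handshake failure is apparently transient (sshd in the freshly started container is not yet accepting connections) and minikube retries; provisioning succeeds a few seconds later. Once it is up, the same forwarded endpoint can be reached manually with the key generated above; the forwarded port 33797 and the docker user appear later in this log. Illustrative only:

    ssh -i /home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa \
        -p 33797 docker@127.0.0.1 hostname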
	I1014 21:16:19.049160  732597 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.31560407s
	I1014 21:16:20.505187  732597 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.77124232s
	I1014 21:16:22.236186  732597 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.503196448s
	I1014 21:16:22.259361  732597 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 21:16:22.292508  732597 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 21:16:22.308048  732597 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 21:16:22.308274  732597 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-319221 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 21:16:22.325013  732597 kubeadm.go:318] [bootstrap-token] Using token: v4ixu7.bt1wga1z11iagoxx
	I1014 21:16:22.327871  732597 out.go:252]   - Configuring RBAC rules ...
	I1014 21:16:22.328005  732597 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 21:16:22.340658  732597 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 21:16:22.352681  732597 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 21:16:22.354511  732597 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 21:16:22.362968  732597 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 21:16:22.366660  732597 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 21:16:22.646677  732597 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 21:16:23.102917  732597 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1014 21:16:23.646302  732597 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1014 21:16:23.647619  732597 kubeadm.go:318] 
	I1014 21:16:23.647706  732597 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1014 21:16:23.647714  732597 kubeadm.go:318] 
	I1014 21:16:23.647795  732597 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1014 21:16:23.647800  732597 kubeadm.go:318] 
	I1014 21:16:23.647827  732597 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1014 21:16:23.648053  732597 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 21:16:23.648111  732597 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 21:16:23.648116  732597 kubeadm.go:318] 
	I1014 21:16:23.648172  732597 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1014 21:16:23.648177  732597 kubeadm.go:318] 
	I1014 21:16:23.648227  732597 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 21:16:23.648231  732597 kubeadm.go:318] 
	I1014 21:16:23.648285  732597 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1014 21:16:23.648381  732597 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 21:16:23.648454  732597 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 21:16:23.648458  732597 kubeadm.go:318] 
	I1014 21:16:23.648546  732597 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 21:16:23.648626  732597 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1014 21:16:23.648631  732597 kubeadm.go:318] 
	I1014 21:16:23.648718  732597 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token v4ixu7.bt1wga1z11iagoxx \
	I1014 21:16:23.648826  732597 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a \
	I1014 21:16:23.648847  732597 kubeadm.go:318] 	--control-plane 
	I1014 21:16:23.648852  732597 kubeadm.go:318] 
	I1014 21:16:23.648941  732597 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1014 21:16:23.648946  732597 kubeadm.go:318] 
	I1014 21:16:23.649031  732597 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token v4ixu7.bt1wga1z11iagoxx \
	I1014 21:16:23.649137  732597 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a 
	I1014 21:16:23.654513  732597 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1014 21:16:23.654789  732597 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1014 21:16:23.654910  732597 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 21:16:23.654933  732597 cni.go:84] Creating CNI manager for ""
	I1014 21:16:23.654945  732597 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:16:23.660186  732597 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1014 21:16:20.780663  735865 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-650951
	
	I1014 21:16:20.780688  735865 ubuntu.go:182] provisioning hostname "embed-certs-650951"
	I1014 21:16:20.780757  735865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:16:20.798315  735865 main.go:141] libmachine: Using SSH client type: native
	I1014 21:16:20.798637  735865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1014 21:16:20.798653  735865 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-650951 && echo "embed-certs-650951" | sudo tee /etc/hostname
	I1014 21:16:20.955871  735865 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-650951
	
	I1014 21:16:20.955950  735865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:16:20.975512  735865 main.go:141] libmachine: Using SSH client type: native
	I1014 21:16:20.975831  735865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1014 21:16:20.975855  735865 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-650951' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-650951/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-650951' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:16:21.132830  735865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 21:16:21.132856  735865 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:16:21.132916  735865 ubuntu.go:190] setting up certificates
	I1014 21:16:21.132927  735865 provision.go:84] configureAuth start
	I1014 21:16:21.133019  735865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-650951
	I1014 21:16:21.152283  735865 provision.go:143] copyHostCerts
	I1014 21:16:21.152381  735865 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:16:21.152391  735865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:16:21.152469  735865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:16:21.152573  735865 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:16:21.152584  735865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:16:21.152610  735865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:16:21.152672  735865 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:16:21.152682  735865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:16:21.152713  735865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:16:21.152773  735865 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.embed-certs-650951 san=[127.0.0.1 192.168.85.2 embed-certs-650951 localhost minikube]
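Note: the SANs baked into the generated server certificate (127.0.0.1, 192.168.85.2, embed-certs-650951, localhost, minikube) can be confirmed with openssl; illustrative only:

    openssl x509 -in /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'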
	I1014 21:16:22.514444  735865 provision.go:177] copyRemoteCerts
	I1014 21:16:22.514519  735865 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:16:22.514570  735865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:16:22.535581  735865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:16:22.640908  735865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:16:22.672878  735865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 21:16:22.694101  735865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 21:16:22.723288  735865 provision.go:87] duration metric: took 1.590342853s to configureAuth
	I1014 21:16:22.723315  735865 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:16:22.723504  735865 config.go:182] Loaded profile config "embed-certs-650951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:16:22.723615  735865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:16:22.751860  735865 main.go:141] libmachine: Using SSH client type: native
	I1014 21:16:22.752149  735865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1014 21:16:22.752173  735865 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:16:23.063677  735865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
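Note: the drop-in written by the SSH command above can be read back on the node if needed; illustrative only:

    sudo cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '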
	
	I1014 21:16:23.063746  735865 machine.go:96] duration metric: took 5.464894417s to provisionDockerMachine
	I1014 21:16:23.063771  735865 client.go:171] duration metric: took 12.350126665s to LocalClient.Create
	I1014 21:16:23.063804  735865 start.go:167] duration metric: took 12.350206025s to libmachine.API.Create "embed-certs-650951"
	I1014 21:16:23.063845  735865 start.go:293] postStartSetup for "embed-certs-650951" (driver="docker")
	I1014 21:16:23.063869  735865 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:16:23.063976  735865 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:16:23.064041  735865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:16:23.089253  735865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:16:23.205676  735865 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:16:23.212852  735865 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:16:23.212885  735865 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:16:23.212896  735865 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:16:23.212952  735865 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:16:23.213032  735865 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:16:23.213138  735865 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:16:23.223800  735865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:16:23.250623  735865 start.go:296] duration metric: took 186.749938ms for postStartSetup
	I1014 21:16:23.251082  735865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-650951
	I1014 21:16:23.269814  735865 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/config.json ...
	I1014 21:16:23.270085  735865 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:16:23.270128  735865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:16:23.294179  735865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:16:23.414163  735865 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:16:23.419621  735865 start.go:128] duration metric: took 12.709743039s to createHost
	I1014 21:16:23.419644  735865 start.go:83] releasing machines lock for "embed-certs-650951", held for 12.709874914s
	I1014 21:16:23.419728  735865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-650951
	I1014 21:16:23.438341  735865 ssh_runner.go:195] Run: cat /version.json
	I1014 21:16:23.438400  735865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:16:23.438668  735865 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:16:23.438726  735865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:16:23.464104  735865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:16:23.475318  735865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:16:23.572371  735865 ssh_runner.go:195] Run: systemctl --version
	I1014 21:16:23.680125  735865 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:16:23.742603  735865 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:16:23.749015  735865 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:16:23.749137  735865 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:16:23.791303  735865 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1014 21:16:23.791380  735865 start.go:495] detecting cgroup driver to use...
	I1014 21:16:23.791427  735865 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:16:23.791498  735865 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:16:23.810137  735865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:16:23.825444  735865 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:16:23.825575  735865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:16:23.844178  735865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:16:23.870176  735865 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:16:24.032565  735865 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:16:24.201654  735865 docker.go:234] disabling docker service ...
	I1014 21:16:24.201776  735865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:16:24.248077  735865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:16:24.265952  735865 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:16:24.476679  735865 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:16:24.656062  735865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:16:24.677464  735865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:16:24.693726  735865 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:16:24.693794  735865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:24.706814  735865 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:16:24.706885  735865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:24.717083  735865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:24.729409  735865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:24.738827  735865 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:16:24.747234  735865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:24.755839  735865 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:16:24.769397  735865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
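Note: after this sequence of sed edits, the relevant keys in the CRI-O drop-in would look roughly like the following (a sketch of just the values set above, not the full file):

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys only)
    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]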
	I1014 21:16:24.778657  735865 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:16:24.785963  735865 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:16:24.793045  735865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:16:24.909091  735865 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 21:16:25.060221  735865 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:16:25.060388  735865 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:16:25.064867  735865 start.go:563] Will wait 60s for crictl version
	I1014 21:16:25.064983  735865 ssh_runner.go:195] Run: which crictl
	I1014 21:16:25.068796  735865 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:16:25.103432  735865 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:16:25.103605  735865 ssh_runner.go:195] Run: crio --version
	I1014 21:16:25.157766  735865 ssh_runner.go:195] Run: crio --version
	I1014 21:16:25.201064  735865 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 21:16:25.204006  735865 cli_runner.go:164] Run: docker network inspect embed-certs-650951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:16:25.220941  735865 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1014 21:16:25.225475  735865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:16:25.235237  735865 kubeadm.go:883] updating cluster {Name:embed-certs-650951 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-650951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:16:25.235348  735865 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:16:25.235411  735865 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:16:25.266735  735865 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:16:25.266759  735865 crio.go:433] Images already preloaded, skipping extraction
	I1014 21:16:25.266814  735865 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:16:25.293908  735865 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:16:25.293932  735865 cache_images.go:85] Images are preloaded, skipping loading
	I1014 21:16:25.293941  735865 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1014 21:16:25.294030  735865 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-650951 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-650951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
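Note: the kubelet unit override rendered above, and the flags the node's kubelet actually runs with, can be inspected on the node with standard systemd and proc tooling; illustrative only:

    sudo systemctl cat kubelet
    pgrep -af kubelet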
	I1014 21:16:25.294114  735865 ssh_runner.go:195] Run: crio config
	I1014 21:16:25.352987  735865 cni.go:84] Creating CNI manager for ""
	I1014 21:16:25.353009  735865 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:16:25.353025  735865 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 21:16:25.353069  735865 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-650951 NodeName:embed-certs-650951 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:16:25.353249  735865 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-650951"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 21:16:25.353327  735865 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:16:25.362363  735865 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:16:25.362435  735865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:16:25.370338  735865 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1014 21:16:25.385940  735865 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:16:25.399337  735865 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1014 21:16:25.412146  735865 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:16:25.415850  735865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:16:25.428417  735865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:16:25.553492  735865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:16:25.574857  735865 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951 for IP: 192.168.85.2
	I1014 21:16:25.574879  735865 certs.go:195] generating shared ca certs ...
	I1014 21:16:25.574895  735865 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:25.575031  735865 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:16:25.575081  735865 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:16:25.575093  735865 certs.go:257] generating profile certs ...
	I1014 21:16:25.575163  735865 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/client.key
	I1014 21:16:25.575182  735865 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/client.crt with IP's: []
	I1014 21:16:26.221308  735865 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/client.crt ...
	I1014 21:16:26.221341  735865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/client.crt: {Name:mkd14d6d93ce65a1cbf7d71dbf555ca2fcd13af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:26.221583  735865 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/client.key ...
	I1014 21:16:26.221602  735865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/client.key: {Name:mk74d47f3f1eb81d4fba8da4adc4a4d75885e029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:26.221733  735865 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/apiserver.key.613231d7
	I1014 21:16:26.221756  735865 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/apiserver.crt.613231d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1014 21:16:26.698840  735865 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/apiserver.crt.613231d7 ...
	I1014 21:16:26.698871  735865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/apiserver.crt.613231d7: {Name:mk21ccce36d1e76fbf89b8cf9e6c764d8a80aab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:26.699043  735865 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/apiserver.key.613231d7 ...
	I1014 21:16:26.699058  735865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/apiserver.key.613231d7: {Name:mk7a674ac6b4b44b971555b2c8994a8c594006d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:26.699140  735865 certs.go:382] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/apiserver.crt.613231d7 -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/apiserver.crt
	I1014 21:16:26.699217  735865 certs.go:386] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/apiserver.key.613231d7 -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/apiserver.key
	I1014 21:16:26.699275  735865 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/proxy-client.key
	I1014 21:16:26.699293  735865 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/proxy-client.crt with IP's: []
	I1014 21:16:26.856917  735865 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/proxy-client.crt ...
	I1014 21:16:26.856948  735865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/proxy-client.crt: {Name:mkf876e53daf5f8239feeca1c45c4c43a33d2f00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:26.857123  735865 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/proxy-client.key ...
	I1014 21:16:26.857137  735865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/proxy-client.key: {Name:mka9fe0b1371da9517af7f9da3d88896220565e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:26.857328  735865 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:16:26.857369  735865 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:16:26.857383  735865 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:16:26.857406  735865 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:16:26.857432  735865 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:16:26.857457  735865 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:16:26.857502  735865 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:16:26.858124  735865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:16:26.885194  735865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:16:26.906051  735865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:16:26.932044  735865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:16:26.956951  735865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1014 21:16:26.985226  735865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 21:16:27.011147  735865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:16:27.039480  735865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 21:16:27.063520  735865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:16:27.094446  735865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:16:27.124499  735865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:16:27.147963  735865 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:16:27.162104  735865 ssh_runner.go:195] Run: openssl version
	I1014 21:16:27.172906  735865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:16:27.187109  735865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:16:27.194920  735865 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:16:27.195042  735865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:16:27.236341  735865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:16:27.245003  735865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:16:27.254136  735865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:16:27.258175  735865 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:16:27.258247  735865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:16:27.302937  735865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 21:16:27.312260  735865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:16:27.321114  735865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:16:27.325589  735865 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:16:27.325654  735865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:16:27.368991  735865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 21:16:27.378422  735865 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:16:27.382245  735865 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 21:16:27.382299  735865 kubeadm.go:400] StartCluster: {Name:embed-certs-650951 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-650951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:16:27.382382  735865 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:16:27.382443  735865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:16:27.410024  735865 cri.go:89] found id: ""
	I1014 21:16:27.410105  735865 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:16:27.418286  735865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 21:16:27.427578  735865 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 21:16:27.427668  735865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 21:16:27.436074  735865 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 21:16:27.436097  735865 kubeadm.go:157] found existing configuration files:
	
	I1014 21:16:27.436178  735865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 21:16:27.444668  735865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 21:16:27.444845  735865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 21:16:27.452523  735865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 21:16:27.460995  735865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 21:16:27.461090  735865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 21:16:27.468682  735865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 21:16:27.476952  735865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 21:16:27.477016  735865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 21:16:27.484475  735865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 21:16:27.493008  735865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 21:16:27.493080  735865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 21:16:27.500675  735865 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 21:16:27.547519  735865 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 21:16:27.547740  735865 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 21:16:27.596694  735865 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 21:16:27.596875  735865 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1014 21:16:27.596942  735865 kubeadm.go:318] OS: Linux
	I1014 21:16:27.597021  735865 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 21:16:27.597099  735865 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1014 21:16:27.597171  735865 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 21:16:27.597247  735865 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 21:16:27.597321  735865 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 21:16:27.597403  735865 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 21:16:27.597477  735865 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 21:16:27.597552  735865 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 21:16:27.597627  735865 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1014 21:16:27.688870  735865 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 21:16:27.689043  735865 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 21:16:27.689184  735865 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 21:16:27.700496  735865 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 21:16:23.663291  732597 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 21:16:23.667576  732597 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1014 21:16:23.667600  732597 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 21:16:23.685771  732597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 21:16:24.138820  732597 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 21:16:24.138951  732597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:16:24.139018  732597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-319221 minikube.k8s.io/updated_at=2025_10_14T21_16_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662 minikube.k8s.io/name=default-k8s-diff-port-319221 minikube.k8s.io/primary=true
	I1014 21:16:24.546255  732597 ops.go:34] apiserver oom_adj: -16
	I1014 21:16:24.546358  732597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:16:25.046925  732597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:16:25.546885  732597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:16:26.046486  732597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:16:26.546473  732597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:16:27.046472  732597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:16:27.547147  732597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:16:28.047169  732597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:16:28.298761  732597 kubeadm.go:1113] duration metric: took 4.159856225s to wait for elevateKubeSystemPrivileges
	I1014 21:16:28.298793  732597 kubeadm.go:402] duration metric: took 23.77439896s to StartCluster
	I1014 21:16:28.298813  732597 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:28.298875  732597 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:16:28.299531  732597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:28.299748  732597 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:16:28.299876  732597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 21:16:28.300116  732597 config.go:182] Loaded profile config "default-k8s-diff-port-319221": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:16:28.300161  732597 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 21:16:28.300226  732597 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-319221"
	I1014 21:16:28.300240  732597 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-319221"
	I1014 21:16:28.300267  732597 host.go:66] Checking if "default-k8s-diff-port-319221" exists ...
	I1014 21:16:28.300783  732597 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-319221 --format={{.State.Status}}
	I1014 21:16:28.301402  732597 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-319221"
	I1014 21:16:28.301427  732597 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-319221"
	I1014 21:16:28.301801  732597 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-319221 --format={{.State.Status}}
	I1014 21:16:28.304933  732597 out.go:179] * Verifying Kubernetes components...
	I1014 21:16:28.308877  732597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:16:28.340014  732597 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:16:28.348103  732597 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:16:28.348128  732597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 21:16:28.348202  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:16:28.349864  732597 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-319221"
	I1014 21:16:28.349907  732597 host.go:66] Checking if "default-k8s-diff-port-319221" exists ...
	I1014 21:16:28.350321  732597 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-319221 --format={{.State.Status}}
	I1014 21:16:28.387613  732597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/default-k8s-diff-port-319221/id_rsa Username:docker}
	I1014 21:16:28.391596  732597 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 21:16:28.391620  732597 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 21:16:28.391680  732597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:16:28.419450  732597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/default-k8s-diff-port-319221/id_rsa Username:docker}
	I1014 21:16:28.723120  732597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 21:16:28.766921  732597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:16:28.833793  732597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 21:16:28.833975  732597 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:16:30.103681  732597 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.269671177s)
	I1014 21:16:30.104543  732597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.336648679s)
	I1014 21:16:30.105242  732597 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.271360628s)
	I1014 21:16:30.105271  732597 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1014 21:16:30.105333  732597 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-319221" to be "Ready" ...
	I1014 21:16:30.110280  732597 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1014 21:16:27.705425  735865 out.go:252]   - Generating certificates and keys ...
	I1014 21:16:27.705586  735865 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 21:16:27.705692  735865 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 21:16:28.613568  735865 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 21:16:29.249932  735865 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 21:16:29.714378  735865 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 21:16:30.221534  735865 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 21:16:30.113221  732597 addons.go:514] duration metric: took 1.813033206s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1014 21:16:30.614324  732597 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-319221" context rescaled to 1 replicas
	W1014 21:16:32.110574  732597 node_ready.go:57] node "default-k8s-diff-port-319221" has "Ready":"False" status (will retry)
	I1014 21:16:30.927829  735865 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 21:16:30.928172  735865 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-650951 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1014 21:16:31.239756  735865 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 21:16:31.239936  735865 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-650951 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1014 21:16:31.339963  735865 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 21:16:31.896163  735865 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 21:16:33.078208  735865 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 21:16:33.078538  735865 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 21:16:33.560922  735865 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 21:16:33.947786  735865 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 21:16:34.390020  735865 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 21:16:34.623785  735865 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 21:16:34.822297  735865 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 21:16:34.823866  735865 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 21:16:34.826741  735865 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 21:16:34.830107  735865 out.go:252]   - Booting up control plane ...
	I1014 21:16:34.830228  735865 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 21:16:34.832167  735865 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 21:16:34.834681  735865 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 21:16:34.855552  735865 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 21:16:34.855668  735865 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 21:16:34.863439  735865 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 21:16:34.863959  735865 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 21:16:34.864254  735865 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 21:16:34.992109  735865 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 21:16:34.992235  735865 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1014 21:16:34.610313  732597 node_ready.go:57] node "default-k8s-diff-port-319221" has "Ready":"False" status (will retry)
	W1014 21:16:36.612748  732597 node_ready.go:57] node "default-k8s-diff-port-319221" has "Ready":"False" status (will retry)
	I1014 21:16:36.497756  735865 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.507284165s
	I1014 21:16:36.502561  735865 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 21:16:36.502678  735865 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1014 21:16:36.502775  735865 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 21:16:36.502866  735865 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 21:16:40.121161  735865 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.617268719s
	W1014 21:16:39.109240  732597 node_ready.go:57] node "default-k8s-diff-port-319221" has "Ready":"False" status (will retry)
	W1014 21:16:41.610875  732597 node_ready.go:57] node "default-k8s-diff-port-319221" has "Ready":"False" status (will retry)
	I1014 21:16:41.338696  735865 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.836164731s
	I1014 21:16:43.507162  735865 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.004520441s
	I1014 21:16:43.527723  735865 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 21:16:43.542931  735865 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 21:16:43.562333  735865 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 21:16:43.562542  735865 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-650951 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 21:16:43.575791  735865 kubeadm.go:318] [bootstrap-token] Using token: tvhi8q.d7znly6t4apcj8je
	I1014 21:16:43.578681  735865 out.go:252]   - Configuring RBAC rules ...
	I1014 21:16:43.578813  735865 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 21:16:43.583970  735865 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 21:16:43.594510  735865 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 21:16:43.601496  735865 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 21:16:43.616146  735865 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 21:16:43.621742  735865 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 21:16:43.915004  735865 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 21:16:44.349715  735865 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1014 21:16:44.914561  735865 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1014 21:16:44.916088  735865 kubeadm.go:318] 
	I1014 21:16:44.916168  735865 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1014 21:16:44.916178  735865 kubeadm.go:318] 
	I1014 21:16:44.916259  735865 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1014 21:16:44.916269  735865 kubeadm.go:318] 
	I1014 21:16:44.916295  735865 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1014 21:16:44.916389  735865 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 21:16:44.916451  735865 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 21:16:44.916467  735865 kubeadm.go:318] 
	I1014 21:16:44.916523  735865 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1014 21:16:44.916533  735865 kubeadm.go:318] 
	I1014 21:16:44.916583  735865 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 21:16:44.916592  735865 kubeadm.go:318] 
	I1014 21:16:44.916646  735865 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1014 21:16:44.916729  735865 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 21:16:44.916806  735865 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 21:16:44.916815  735865 kubeadm.go:318] 
	I1014 21:16:44.916903  735865 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 21:16:44.916992  735865 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1014 21:16:44.917001  735865 kubeadm.go:318] 
	I1014 21:16:44.917090  735865 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token tvhi8q.d7znly6t4apcj8je \
	I1014 21:16:44.917202  735865 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a \
	I1014 21:16:44.917230  735865 kubeadm.go:318] 	--control-plane 
	I1014 21:16:44.917239  735865 kubeadm.go:318] 
	I1014 21:16:44.917327  735865 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1014 21:16:44.917336  735865 kubeadm.go:318] 
	I1014 21:16:44.917421  735865 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token tvhi8q.d7znly6t4apcj8je \
	I1014 21:16:44.917532  735865 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a 
	I1014 21:16:44.922907  735865 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1014 21:16:44.923145  735865 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1014 21:16:44.923262  735865 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 21:16:44.923282  735865 cni.go:84] Creating CNI manager for ""
	I1014 21:16:44.923290  735865 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:16:44.926477  735865 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1014 21:16:44.929322  735865 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 21:16:44.933615  735865 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1014 21:16:44.933648  735865 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 21:16:44.947467  735865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	W1014 21:16:44.108923  732597 node_ready.go:57] node "default-k8s-diff-port-319221" has "Ready":"False" status (will retry)
	W1014 21:16:46.108973  732597 node_ready.go:57] node "default-k8s-diff-port-319221" has "Ready":"False" status (will retry)
	W1014 21:16:48.109535  732597 node_ready.go:57] node "default-k8s-diff-port-319221" has "Ready":"False" status (will retry)
	I1014 21:16:45.414175  735865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:16:45.414287  735865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-650951 minikube.k8s.io/updated_at=2025_10_14T21_16_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662 minikube.k8s.io/name=embed-certs-650951 minikube.k8s.io/primary=true
	I1014 21:16:45.414345  735865 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 21:16:45.563229  735865 ops.go:34] apiserver oom_adj: -16
	I1014 21:16:45.563360  735865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:16:46.063443  735865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:16:46.563565  735865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:16:47.064002  735865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:16:47.563968  735865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:16:48.064453  735865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:16:48.563993  735865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:16:48.665822  735865 kubeadm.go:1113] duration metric: took 3.251713329s to wait for elevateKubeSystemPrivileges
	I1014 21:16:48.665852  735865 kubeadm.go:402] duration metric: took 21.283556793s to StartCluster
	I1014 21:16:48.665869  735865 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:48.665928  735865 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:16:48.667308  735865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:16:48.667534  735865 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:16:48.667622  735865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 21:16:48.667872  735865 config.go:182] Loaded profile config "embed-certs-650951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:16:48.667914  735865 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 21:16:48.667978  735865 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-650951"
	I1014 21:16:48.667999  735865 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-650951"
	I1014 21:16:48.668022  735865 host.go:66] Checking if "embed-certs-650951" exists ...
	I1014 21:16:48.668053  735865 addons.go:69] Setting default-storageclass=true in profile "embed-certs-650951"
	I1014 21:16:48.668100  735865 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-650951"
	I1014 21:16:48.668467  735865 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:16:48.668538  735865 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:16:48.670998  735865 out.go:179] * Verifying Kubernetes components...
	I1014 21:16:48.674085  735865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:16:48.696663  735865 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:16:48.699887  735865 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:16:48.699907  735865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 21:16:48.699978  735865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:16:48.706084  735865 addons.go:238] Setting addon default-storageclass=true in "embed-certs-650951"
	I1014 21:16:48.706206  735865 host.go:66] Checking if "embed-certs-650951" exists ...
	I1014 21:16:48.706651  735865 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:16:48.735462  735865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:16:48.749020  735865 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 21:16:48.749042  735865 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 21:16:48.749104  735865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:16:48.778584  735865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:16:48.902752  735865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 21:16:48.983957  735865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:16:49.007119  735865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 21:16:49.043616  735865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:16:49.795282  735865 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1014 21:16:49.798263  735865 node_ready.go:35] waiting up to 6m0s for node "embed-certs-650951" to be "Ready" ...
	I1014 21:16:50.029761  735865 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1014 21:16:50.032690  735865 addons.go:514] duration metric: took 1.364755216s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1014 21:16:50.300228  735865 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-650951" context rescaled to 1 replicas
	W1014 21:16:50.608954  732597 node_ready.go:57] node "default-k8s-diff-port-319221" has "Ready":"False" status (will retry)
	W1014 21:16:53.108771  732597 node_ready.go:57] node "default-k8s-diff-port-319221" has "Ready":"False" status (will retry)
	W1014 21:16:51.801633  735865 node_ready.go:57] node "embed-certs-650951" has "Ready":"False" status (will retry)
	W1014 21:16:54.301197  735865 node_ready.go:57] node "embed-certs-650951" has "Ready":"False" status (will retry)
	W1014 21:16:55.609229  732597 node_ready.go:57] node "default-k8s-diff-port-319221" has "Ready":"False" status (will retry)
	W1014 21:16:57.609327  732597 node_ready.go:57] node "default-k8s-diff-port-319221" has "Ready":"False" status (will retry)
	W1014 21:16:56.301810  735865 node_ready.go:57] node "embed-certs-650951" has "Ready":"False" status (will retry)
	W1014 21:16:58.800934  735865 node_ready.go:57] node "embed-certs-650951" has "Ready":"False" status (will retry)
	W1014 21:16:59.609397  732597 node_ready.go:57] node "default-k8s-diff-port-319221" has "Ready":"False" status (will retry)
	W1014 21:17:01.609975  732597 node_ready.go:57] node "default-k8s-diff-port-319221" has "Ready":"False" status (will retry)
	W1014 21:17:00.801661  735865 node_ready.go:57] node "embed-certs-650951" has "Ready":"False" status (will retry)
	W1014 21:17:02.801911  735865 node_ready.go:57] node "embed-certs-650951" has "Ready":"False" status (will retry)
	W1014 21:17:05.300996  735865 node_ready.go:57] node "embed-certs-650951" has "Ready":"False" status (will retry)
	W1014 21:17:03.610195  732597 node_ready.go:57] node "default-k8s-diff-port-319221" has "Ready":"False" status (will retry)
	W1014 21:17:06.114382  732597 node_ready.go:57] node "default-k8s-diff-port-319221" has "Ready":"False" status (will retry)
	W1014 21:17:07.301081  735865 node_ready.go:57] node "embed-certs-650951" has "Ready":"False" status (will retry)
	W1014 21:17:09.301521  735865 node_ready.go:57] node "embed-certs-650951" has "Ready":"False" status (will retry)
	W1014 21:17:08.609136  732597 node_ready.go:57] node "default-k8s-diff-port-319221" has "Ready":"False" status (will retry)
	W1014 21:17:10.612548  732597 node_ready.go:57] node "default-k8s-diff-port-319221" has "Ready":"False" status (will retry)
	I1014 21:17:11.111762  732597 node_ready.go:49] node "default-k8s-diff-port-319221" is "Ready"
	I1014 21:17:11.111789  732597 node_ready.go:38] duration metric: took 41.006380341s for node "default-k8s-diff-port-319221" to be "Ready" ...
	I1014 21:17:11.111830  732597 api_server.go:52] waiting for apiserver process to appear ...
	I1014 21:17:11.111893  732597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 21:17:11.127756  732597 api_server.go:72] duration metric: took 42.82797172s to wait for apiserver process to appear ...
	I1014 21:17:11.127780  732597 api_server.go:88] waiting for apiserver healthz status ...
	I1014 21:17:11.127798  732597 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1014 21:17:11.136339  732597 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1014 21:17:11.144087  732597 api_server.go:141] control plane version: v1.34.1
	I1014 21:17:11.144116  732597 api_server.go:131] duration metric: took 16.329889ms to wait for apiserver health ...
	I1014 21:17:11.144126  732597 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 21:17:11.149945  732597 system_pods.go:59] 8 kube-system pods found
	I1014 21:17:11.150034  732597 system_pods.go:61] "coredns-66bc5c9577-lm24g" [e78d17f0-be99-4e8f-95aa-a1bff9d1c00c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 21:17:11.150055  732597 system_pods.go:61] "etcd-default-k8s-diff-port-319221" [8fef365a-dcac-4b2f-abf7-32d977075318] Running
	I1014 21:17:11.150079  732597 system_pods.go:61] "kindnet-5hx2c" [5b7f72f1-7196-4924-bed1-21618e4fa3d0] Running
	I1014 21:17:11.150121  732597 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-319221" [cc381005-1d41-4ead-b319-7bd9b6db50bf] Running
	I1014 21:17:11.150140  732597 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-319221" [b3489f96-23d3-44fc-8b7e-f9cf776b6b76] Running
	I1014 21:17:11.150159  732597 system_pods.go:61] "kube-proxy-gc4zh" [d80ae09b-02db-4989-90e0-1d02a15728e4] Running
	I1014 21:17:11.150195  732597 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-319221" [6c30d149-2d84-4c09-9f90-80f583d0bf73] Running
	I1014 21:17:11.150222  732597 system_pods.go:61] "storage-provisioner" [d9f17d56-0e4c-43d7-b7ca-11ed32ad749c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 21:17:11.150245  732597 system_pods.go:74] duration metric: took 6.111819ms to wait for pod list to return data ...
	I1014 21:17:11.150283  732597 default_sa.go:34] waiting for default service account to be created ...
	I1014 21:17:11.176096  732597 default_sa.go:45] found service account: "default"
	I1014 21:17:11.176179  732597 default_sa.go:55] duration metric: took 25.872448ms for default service account to be created ...
	I1014 21:17:11.176214  732597 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 21:17:11.183455  732597 system_pods.go:86] 8 kube-system pods found
	I1014 21:17:11.183536  732597 system_pods.go:89] "coredns-66bc5c9577-lm24g" [e78d17f0-be99-4e8f-95aa-a1bff9d1c00c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 21:17:11.183560  732597 system_pods.go:89] "etcd-default-k8s-diff-port-319221" [8fef365a-dcac-4b2f-abf7-32d977075318] Running
	I1014 21:17:11.183584  732597 system_pods.go:89] "kindnet-5hx2c" [5b7f72f1-7196-4924-bed1-21618e4fa3d0] Running
	I1014 21:17:11.183622  732597 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-319221" [cc381005-1d41-4ead-b319-7bd9b6db50bf] Running
	I1014 21:17:11.183641  732597 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-319221" [b3489f96-23d3-44fc-8b7e-f9cf776b6b76] Running
	I1014 21:17:11.183664  732597 system_pods.go:89] "kube-proxy-gc4zh" [d80ae09b-02db-4989-90e0-1d02a15728e4] Running
	I1014 21:17:11.183697  732597 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-319221" [6c30d149-2d84-4c09-9f90-80f583d0bf73] Running
	I1014 21:17:11.183722  732597 system_pods.go:89] "storage-provisioner" [d9f17d56-0e4c-43d7-b7ca-11ed32ad749c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 21:17:11.183756  732597 retry.go:31] will retry after 219.201573ms: missing components: kube-dns
	I1014 21:17:11.407649  732597 system_pods.go:86] 8 kube-system pods found
	I1014 21:17:11.407684  732597 system_pods.go:89] "coredns-66bc5c9577-lm24g" [e78d17f0-be99-4e8f-95aa-a1bff9d1c00c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 21:17:11.407691  732597 system_pods.go:89] "etcd-default-k8s-diff-port-319221" [8fef365a-dcac-4b2f-abf7-32d977075318] Running
	I1014 21:17:11.407697  732597 system_pods.go:89] "kindnet-5hx2c" [5b7f72f1-7196-4924-bed1-21618e4fa3d0] Running
	I1014 21:17:11.407702  732597 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-319221" [cc381005-1d41-4ead-b319-7bd9b6db50bf] Running
	I1014 21:17:11.407706  732597 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-319221" [b3489f96-23d3-44fc-8b7e-f9cf776b6b76] Running
	I1014 21:17:11.407711  732597 system_pods.go:89] "kube-proxy-gc4zh" [d80ae09b-02db-4989-90e0-1d02a15728e4] Running
	I1014 21:17:11.407715  732597 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-319221" [6c30d149-2d84-4c09-9f90-80f583d0bf73] Running
	I1014 21:17:11.407722  732597 system_pods.go:89] "storage-provisioner" [d9f17d56-0e4c-43d7-b7ca-11ed32ad749c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 21:17:11.407744  732597 retry.go:31] will retry after 309.322292ms: missing components: kube-dns
	I1014 21:17:11.721642  732597 system_pods.go:86] 8 kube-system pods found
	I1014 21:17:11.721674  732597 system_pods.go:89] "coredns-66bc5c9577-lm24g" [e78d17f0-be99-4e8f-95aa-a1bff9d1c00c] Running
	I1014 21:17:11.721682  732597 system_pods.go:89] "etcd-default-k8s-diff-port-319221" [8fef365a-dcac-4b2f-abf7-32d977075318] Running
	I1014 21:17:11.721688  732597 system_pods.go:89] "kindnet-5hx2c" [5b7f72f1-7196-4924-bed1-21618e4fa3d0] Running
	I1014 21:17:11.721693  732597 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-319221" [cc381005-1d41-4ead-b319-7bd9b6db50bf] Running
	I1014 21:17:11.721697  732597 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-319221" [b3489f96-23d3-44fc-8b7e-f9cf776b6b76] Running
	I1014 21:17:11.721701  732597 system_pods.go:89] "kube-proxy-gc4zh" [d80ae09b-02db-4989-90e0-1d02a15728e4] Running
	I1014 21:17:11.721705  732597 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-319221" [6c30d149-2d84-4c09-9f90-80f583d0bf73] Running
	I1014 21:17:11.721709  732597 system_pods.go:89] "storage-provisioner" [d9f17d56-0e4c-43d7-b7ca-11ed32ad749c] Running
	I1014 21:17:11.721717  732597 system_pods.go:126] duration metric: took 545.464939ms to wait for k8s-apps to be running ...
	I1014 21:17:11.721729  732597 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 21:17:11.721786  732597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:17:11.734550  732597 system_svc.go:56] duration metric: took 12.812812ms WaitForService to wait for kubelet
	I1014 21:17:11.734629  732597 kubeadm.go:586] duration metric: took 43.434849616s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:17:11.734668  732597 node_conditions.go:102] verifying NodePressure condition ...
	I1014 21:17:11.737462  732597 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 21:17:11.737496  732597 node_conditions.go:123] node cpu capacity is 2
	I1014 21:17:11.737508  732597 node_conditions.go:105] duration metric: took 2.82852ms to run NodePressure ...
	I1014 21:17:11.737521  732597 start.go:241] waiting for startup goroutines ...
	I1014 21:17:11.737528  732597 start.go:246] waiting for cluster config update ...
	I1014 21:17:11.737539  732597 start.go:255] writing updated cluster config ...
	I1014 21:17:11.737855  732597 ssh_runner.go:195] Run: rm -f paused
	I1014 21:17:11.741446  732597 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 21:17:11.745450  732597 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lm24g" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:17:11.750284  732597 pod_ready.go:94] pod "coredns-66bc5c9577-lm24g" is "Ready"
	I1014 21:17:11.750320  732597 pod_ready.go:86] duration metric: took 4.841677ms for pod "coredns-66bc5c9577-lm24g" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:17:11.752955  732597 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:17:11.757537  732597 pod_ready.go:94] pod "etcd-default-k8s-diff-port-319221" is "Ready"
	I1014 21:17:11.757564  732597 pod_ready.go:86] duration metric: took 4.586147ms for pod "etcd-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:17:11.760000  732597 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:17:11.765165  732597 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-319221" is "Ready"
	I1014 21:17:11.765194  732597 pod_ready.go:86] duration metric: took 5.132531ms for pod "kube-apiserver-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:17:11.767949  732597 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:17:12.146678  732597 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-319221" is "Ready"
	I1014 21:17:12.146717  732597 pod_ready.go:86] duration metric: took 378.74145ms for pod "kube-controller-manager-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:17:12.346106  732597 pod_ready.go:83] waiting for pod "kube-proxy-gc4zh" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:17:12.746461  732597 pod_ready.go:94] pod "kube-proxy-gc4zh" is "Ready"
	I1014 21:17:12.746488  732597 pod_ready.go:86] duration metric: took 400.352817ms for pod "kube-proxy-gc4zh" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:17:12.945523  732597 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:17:13.346282  732597 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-319221" is "Ready"
	I1014 21:17:13.346319  732597 pod_ready.go:86] duration metric: took 400.768518ms for pod "kube-scheduler-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:17:13.346330  732597 pod_ready.go:40] duration metric: took 1.604853053s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 21:17:13.403540  732597 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1014 21:17:13.406941  732597 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-319221" cluster and "default" namespace by default
	W1014 21:17:11.800767  735865 node_ready.go:57] node "embed-certs-650951" has "Ready":"False" status (will retry)
	W1014 21:17:13.801046  735865 node_ready.go:57] node "embed-certs-650951" has "Ready":"False" status (will retry)
	W1014 21:17:15.801466  735865 node_ready.go:57] node "embed-certs-650951" has "Ready":"False" status (will retry)
	W1014 21:17:18.301851  735865 node_ready.go:57] node "embed-certs-650951" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 14 21:17:11 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:11.067197552Z" level=info msg="Created container 96b606227d1b41ca7f24d10da06ca860c427ba9aa762317abb1726e3f411a4ab: kube-system/coredns-66bc5c9577-lm24g/coredns" id=1272e939-71af-4c05-a41f-9a962c560aeb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:17:11 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:11.068255061Z" level=info msg="Starting container: 96b606227d1b41ca7f24d10da06ca860c427ba9aa762317abb1726e3f411a4ab" id=1f7508c1-6a8d-474e-8a58-c997c144bd1e name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:17:11 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:11.07337663Z" level=info msg="Started container" PID=1728 containerID=96b606227d1b41ca7f24d10da06ca860c427ba9aa762317abb1726e3f411a4ab description=kube-system/coredns-66bc5c9577-lm24g/coredns id=1f7508c1-6a8d-474e-8a58-c997c144bd1e name=/runtime.v1.RuntimeService/StartContainer sandboxID=62c9adc9c0207f829b4a9c629b0fc6a7d7cc109250334780f683cc401ed5a8fe
	Oct 14 21:17:13 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:13.933341154Z" level=info msg="Running pod sandbox: default/busybox/POD" id=219d9d43-afd5-46b2-87f0-a0e5ab6341e4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:17:13 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:13.933471528Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:17:13 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:13.938809143Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:35a7780fe31333d6d305bdc0fbf93ccd8078e2e791eb50f89e64ee4feec8d157 UID:6b1224d1-c597-4749-b584-15d6c3da4e51 NetNS:/var/run/netns/98c57ab2-bd8e-47ff-91a2-879fa1ea6d0f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000ac8b98}] Aliases:map[]}"
	Oct 14 21:17:13 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:13.938844081Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 14 21:17:13 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:13.951434004Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:35a7780fe31333d6d305bdc0fbf93ccd8078e2e791eb50f89e64ee4feec8d157 UID:6b1224d1-c597-4749-b584-15d6c3da4e51 NetNS:/var/run/netns/98c57ab2-bd8e-47ff-91a2-879fa1ea6d0f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000ac8b98}] Aliases:map[]}"
	Oct 14 21:17:13 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:13.95159029Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 14 21:17:13 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:13.954414568Z" level=info msg="Ran pod sandbox 35a7780fe31333d6d305bdc0fbf93ccd8078e2e791eb50f89e64ee4feec8d157 with infra container: default/busybox/POD" id=219d9d43-afd5-46b2-87f0-a0e5ab6341e4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:17:13 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:13.955903841Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5dd329e5-10e0-4335-931d-955368073e65 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:17:13 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:13.956074765Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5dd329e5-10e0-4335-931d-955368073e65 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:17:13 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:13.956139292Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5dd329e5-10e0-4335-931d-955368073e65 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:17:13 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:13.957152755Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cc21a1e0-def2-43bf-89ca-85db6859a80a name=/runtime.v1.ImageService/PullImage
	Oct 14 21:17:13 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:13.962417777Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 14 21:17:16 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:16.155571807Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=cc21a1e0-def2-43bf-89ca-85db6859a80a name=/runtime.v1.ImageService/PullImage
	Oct 14 21:17:16 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:16.156545433Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b6806df4-6ad1-411c-8d1e-4fcfec1217e4 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:17:16 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:16.159617413Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a18ea7e3-623c-4fa2-a124-2b25a90f42b0 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:17:16 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:16.165014524Z" level=info msg="Creating container: default/busybox/busybox" id=edbd61d7-9064-445a-bc09-28b3e8c3c064 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:17:16 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:16.165836327Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:17:16 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:16.170373308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:17:16 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:16.170989748Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:17:16 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:16.190324431Z" level=info msg="Created container ab063fca61fd3581a81181acaeb8e90f750f48c9bfc08db062e95f56022e9f49: default/busybox/busybox" id=edbd61d7-9064-445a-bc09-28b3e8c3c064 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:17:16 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:16.192153477Z" level=info msg="Starting container: ab063fca61fd3581a81181acaeb8e90f750f48c9bfc08db062e95f56022e9f49" id=573033ad-2434-4bff-aa26-5b8366be86a1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:17:16 default-k8s-diff-port-319221 crio[840]: time="2025-10-14T21:17:16.195603237Z" level=info msg="Started container" PID=1790 containerID=ab063fca61fd3581a81181acaeb8e90f750f48c9bfc08db062e95f56022e9f49 description=default/busybox/busybox id=573033ad-2434-4bff-aa26-5b8366be86a1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=35a7780fe31333d6d305bdc0fbf93ccd8078e2e791eb50f89e64ee4feec8d157
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	ab063fca61fd3       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   6 seconds ago        Running             busybox                   0                   35a7780fe3133       busybox                                                default
	96b606227d1b4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   62c9adc9c0207       coredns-66bc5c9577-lm24g                               kube-system
	079598d9833b5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   46a11af4da1c5       storage-provisioner                                    kube-system
	0eebac5564a21       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   e999322531938       kindnet-5hx2c                                          kube-system
	1ccbbcfcc8d53       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   4738bc2ecb5eb       kube-proxy-gc4zh                                       kube-system
	1a4e4b1cace4e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   68859a2f9db5f       kube-scheduler-default-k8s-diff-port-319221            kube-system
	0f0bb1ca41c7b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   ff5a0de4282cf       kube-controller-manager-default-k8s-diff-port-319221   kube-system
	f126da5e68a30       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   601ee12b31d7c       etcd-default-k8s-diff-port-319221                      kube-system
	1d1d67feac003       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   3076e554ea292       kube-apiserver-default-k8s-diff-port-319221            kube-system
	
	
	==> coredns [96b606227d1b41ca7f24d10da06ca860c427ba9aa762317abb1726e3f411a4ab] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46942 - 48999 "HINFO IN 775978133136278631.8802579817603291390. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.004892804s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-319221
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-319221
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=default-k8s-diff-port-319221
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T21_16_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 21:16:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-319221
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 21:17:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 21:17:10 +0000   Tue, 14 Oct 2025 21:16:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 21:17:10 +0000   Tue, 14 Oct 2025 21:16:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 21:17:10 +0000   Tue, 14 Oct 2025 21:16:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 21:17:10 +0000   Tue, 14 Oct 2025 21:17:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-319221
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 416c33630d8b48f483939bf1b37d2905
	  System UUID:                a87a8a0d-9a85-45a1-87ca-6b235dc2ea96
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-lm24g                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-diff-port-319221                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-5hx2c                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-319221             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-319221    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-gc4zh                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-319221             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 52s                kube-proxy       
	  Normal   Starting                 69s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 69s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node default-k8s-diff-port-319221 event: Registered Node default-k8s-diff-port-319221 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-319221 status is now: NodeReady
	
	
	==> dmesg <==
	[ +36.710003] overlayfs: idmapped layers are currently not supported
	[Oct14 20:45] overlayfs: idmapped layers are currently not supported
	[Oct14 20:46] overlayfs: idmapped layers are currently not supported
	[Oct14 20:47] overlayfs: idmapped layers are currently not supported
	[Oct14 20:48] overlayfs: idmapped layers are currently not supported
	[Oct14 20:49] overlayfs: idmapped layers are currently not supported
	[ +15.861847] overlayfs: idmapped layers are currently not supported
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	[Oct14 21:03] overlayfs: idmapped layers are currently not supported
	[Oct14 21:04] overlayfs: idmapped layers are currently not supported
	[Oct14 21:12] overlayfs: idmapped layers are currently not supported
	[Oct14 21:13] overlayfs: idmapped layers are currently not supported
	[ +32.288963] overlayfs: idmapped layers are currently not supported
	[Oct14 21:15] overlayfs: idmapped layers are currently not supported
	[Oct14 21:16] overlayfs: idmapped layers are currently not supported
	[ +21.775876] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f126da5e68a303e9f89bbfcb78f8e09d1e211e9f379c2fcbf85219457a768ee0] <==
	{"level":"warn","ts":"2025-10-14T21:16:18.499716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:18.526827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:18.579290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:18.608931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:18.672860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:18.684111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:18.715252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:18.738310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:18.765622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:18.810298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:18.831673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:18.851139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:18.901082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:18.919577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:18.940973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:18.976036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:19.018985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:19.052923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:19.073535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:19.090938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:19.108222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:19.133137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:19.153240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:19.165845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:19.272816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44520","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:17:23 up  3:59,  0 user,  load average: 1.67, 1.80, 1.71
	Linux default-k8s-diff-port-319221 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0eebac5564a21d26e96fb7aa6423407186590bf5acf5787e0b3ec736c46c381d] <==
	I1014 21:16:29.873010       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:16:29.873209       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1014 21:16:29.873325       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:16:29.873335       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:16:29.873348       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:16:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:16:30.173724       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:16:30.173795       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:16:30.174312       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:16:30.175192       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1014 21:17:00.174279       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1014 21:17:00.175393       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1014 21:17:00.175402       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1014 21:17:00.175538       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1014 21:17:01.174559       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1014 21:17:01.174600       1 metrics.go:72] Registering metrics
	I1014 21:17:01.174679       1 controller.go:711] "Syncing nftables rules"
	I1014 21:17:10.181691       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1014 21:17:10.181750       1 main.go:301] handling current node
	I1014 21:17:20.174959       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1014 21:17:20.175035       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1d1d67feac00375916e0091c6b6ab9b6b19abb6b09950c581b67cc44d3caee95] <==
	I1014 21:16:20.145615       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1014 21:16:20.145784       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1014 21:16:20.193439       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:16:20.196916       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 21:16:20.217900       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1014 21:16:20.258189       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:16:20.270319       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1014 21:16:20.914443       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1014 21:16:20.919826       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1014 21:16:20.919919       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 21:16:21.727052       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 21:16:21.784158       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 21:16:21.937401       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 21:16:21.938784       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1014 21:16:21.957635       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1014 21:16:21.958943       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 21:16:21.978266       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 21:16:23.059467       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1014 21:16:23.095799       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 21:16:23.110994       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1014 21:16:26.999204       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1014 21:16:27.763212       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:16:27.793033       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:16:28.065065       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1014 21:17:21.754126       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:51406: use of closed network connection
	
	
	==> kube-controller-manager [0f0bb1ca41c7b87226a8d24a7d1c09b8c25091497acd38586daf98e0a2a127d2] <==
	I1014 21:16:27.007157       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1014 21:16:27.018452       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1014 21:16:27.018608       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1014 21:16:27.018654       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 21:16:27.018780       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1014 21:16:27.018795       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1014 21:16:27.028284       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1014 21:16:27.028960       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1014 21:16:27.029303       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1014 21:16:27.029419       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1014 21:16:27.029480       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1014 21:16:27.029554       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1014 21:16:27.029605       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1014 21:16:27.031816       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1014 21:16:27.032961       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1014 21:16:27.033046       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1014 21:16:27.033120       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1014 21:16:27.034659       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1014 21:16:27.034687       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1014 21:16:27.042784       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1014 21:16:27.044535       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1014 21:16:27.045593       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:16:27.051699       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1014 21:16:27.059973       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-319221" podCIDRs=["10.244.0.0/24"]
	I1014 21:17:11.983699       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1ccbbcfcc8d5352d8b2a9ca3a4f85be0496d4800fc5f0f8b31a419aba223918d] <==
	I1014 21:16:29.837832       1 server_linux.go:53] "Using iptables proxy"
	I1014 21:16:30.056722       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 21:16:30.175448       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 21:16:30.175547       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1014 21:16:30.175659       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 21:16:30.377804       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:16:30.377990       1 server_linux.go:132] "Using iptables Proxier"
	I1014 21:16:30.382132       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 21:16:30.382552       1 server.go:527] "Version info" version="v1.34.1"
	I1014 21:16:30.382569       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:16:30.384444       1 config.go:200] "Starting service config controller"
	I1014 21:16:30.384514       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 21:16:30.384566       1 config.go:106] "Starting endpoint slice config controller"
	I1014 21:16:30.384608       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 21:16:30.384646       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 21:16:30.384673       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 21:16:30.385499       1 config.go:309] "Starting node config controller"
	I1014 21:16:30.385558       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 21:16:30.385590       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 21:16:30.498718       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 21:16:30.499492       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1014 21:16:30.499789       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1a4e4b1cace4e58e5edeadd4b789705cce765602cba953590c95407297bea8e8] <==
	I1014 21:16:20.495217       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:16:20.495580       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 21:16:20.495667       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1014 21:16:20.501381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1014 21:16:20.501500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1014 21:16:20.501573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1014 21:16:20.501612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1014 21:16:20.501666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1014 21:16:20.509714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1014 21:16:20.509714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1014 21:16:20.509809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1014 21:16:20.509858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1014 21:16:20.509901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1014 21:16:20.509936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1014 21:16:20.509948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1014 21:16:20.509985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1014 21:16:20.510019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1014 21:16:20.510112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1014 21:16:20.510120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1014 21:16:20.510233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1014 21:16:20.510278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1014 21:16:20.510314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1014 21:16:21.366368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1014 21:16:21.372995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1014 21:16:22.096063       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 14 21:16:27 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:16:27.092836    1307 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 14 21:16:27 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:16:27.096605    1307 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 14 21:16:28 default-k8s-diff-port-319221 kubelet[1307]: E1014 21:16:28.228898    1307 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:default-k8s-diff-port-319221\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-319221' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 14 21:16:28 default-k8s-diff-port-319221 kubelet[1307]: E1014 21:16:28.229019    1307 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:default-k8s-diff-port-319221\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-319221' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 14 21:16:28 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:16:28.243357    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzflh\" (UniqueName: \"kubernetes.io/projected/d80ae09b-02db-4989-90e0-1d02a15728e4-kube-api-access-jzflh\") pod \"kube-proxy-gc4zh\" (UID: \"d80ae09b-02db-4989-90e0-1d02a15728e4\") " pod="kube-system/kube-proxy-gc4zh"
	Oct 14 21:16:28 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:16:28.243412    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d80ae09b-02db-4989-90e0-1d02a15728e4-kube-proxy\") pod \"kube-proxy-gc4zh\" (UID: \"d80ae09b-02db-4989-90e0-1d02a15728e4\") " pod="kube-system/kube-proxy-gc4zh"
	Oct 14 21:16:28 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:16:28.243443    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d80ae09b-02db-4989-90e0-1d02a15728e4-xtables-lock\") pod \"kube-proxy-gc4zh\" (UID: \"d80ae09b-02db-4989-90e0-1d02a15728e4\") " pod="kube-system/kube-proxy-gc4zh"
	Oct 14 21:16:28 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:16:28.243461    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d80ae09b-02db-4989-90e0-1d02a15728e4-lib-modules\") pod \"kube-proxy-gc4zh\" (UID: \"d80ae09b-02db-4989-90e0-1d02a15728e4\") " pod="kube-system/kube-proxy-gc4zh"
	Oct 14 21:16:28 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:16:28.344493    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5b7f72f1-7196-4924-bed1-21618e4fa3d0-cni-cfg\") pod \"kindnet-5hx2c\" (UID: \"5b7f72f1-7196-4924-bed1-21618e4fa3d0\") " pod="kube-system/kindnet-5hx2c"
	Oct 14 21:16:28 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:16:28.344547    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b7f72f1-7196-4924-bed1-21618e4fa3d0-xtables-lock\") pod \"kindnet-5hx2c\" (UID: \"5b7f72f1-7196-4924-bed1-21618e4fa3d0\") " pod="kube-system/kindnet-5hx2c"
	Oct 14 21:16:28 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:16:28.344569    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwxg6\" (UniqueName: \"kubernetes.io/projected/5b7f72f1-7196-4924-bed1-21618e4fa3d0-kube-api-access-lwxg6\") pod \"kindnet-5hx2c\" (UID: \"5b7f72f1-7196-4924-bed1-21618e4fa3d0\") " pod="kube-system/kindnet-5hx2c"
	Oct 14 21:16:28 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:16:28.344591    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b7f72f1-7196-4924-bed1-21618e4fa3d0-lib-modules\") pod \"kindnet-5hx2c\" (UID: \"5b7f72f1-7196-4924-bed1-21618e4fa3d0\") " pod="kube-system/kindnet-5hx2c"
	Oct 14 21:16:29 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:16:29.191252    1307 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 14 21:16:30 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:16:30.455265    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5hx2c" podStartSLOduration=2.455248075 podStartE2EDuration="2.455248075s" podCreationTimestamp="2025-10-14 21:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:16:30.430935945 +0000 UTC m=+7.453457659" watchObservedRunningTime="2025-10-14 21:16:30.455248075 +0000 UTC m=+7.477769780"
	Oct 14 21:16:30 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:16:30.455546    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gc4zh" podStartSLOduration=2.455539106 podStartE2EDuration="2.455539106s" podCreationTimestamp="2025-10-14 21:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:16:30.455048088 +0000 UTC m=+7.477569794" watchObservedRunningTime="2025-10-14 21:16:30.455539106 +0000 UTC m=+7.478060812"
	Oct 14 21:17:10 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:17:10.628539    1307 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 14 21:17:10 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:17:10.779227    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e78d17f0-be99-4e8f-95aa-a1bff9d1c00c-config-volume\") pod \"coredns-66bc5c9577-lm24g\" (UID: \"e78d17f0-be99-4e8f-95aa-a1bff9d1c00c\") " pod="kube-system/coredns-66bc5c9577-lm24g"
	Oct 14 21:17:10 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:17:10.779320    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc9gt\" (UniqueName: \"kubernetes.io/projected/e78d17f0-be99-4e8f-95aa-a1bff9d1c00c-kube-api-access-xc9gt\") pod \"coredns-66bc5c9577-lm24g\" (UID: \"e78d17f0-be99-4e8f-95aa-a1bff9d1c00c\") " pod="kube-system/coredns-66bc5c9577-lm24g"
	Oct 14 21:17:10 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:17:10.779345    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kwsj\" (UniqueName: \"kubernetes.io/projected/d9f17d56-0e4c-43d7-b7ca-11ed32ad749c-kube-api-access-4kwsj\") pod \"storage-provisioner\" (UID: \"d9f17d56-0e4c-43d7-b7ca-11ed32ad749c\") " pod="kube-system/storage-provisioner"
	Oct 14 21:17:10 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:17:10.779366    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d9f17d56-0e4c-43d7-b7ca-11ed32ad749c-tmp\") pod \"storage-provisioner\" (UID: \"d9f17d56-0e4c-43d7-b7ca-11ed32ad749c\") " pod="kube-system/storage-provisioner"
	Oct 14 21:17:10 default-k8s-diff-port-319221 kubelet[1307]: W1014 21:17:10.998636    1307 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/crio-46a11af4da1c50ef848691fb039a7dbafec7bdd83cefa6035e52bb92f991958d WatchSource:0}: Error finding container 46a11af4da1c50ef848691fb039a7dbafec7bdd83cefa6035e52bb92f991958d: Status 404 returned error can't find the container with id 46a11af4da1c50ef848691fb039a7dbafec7bdd83cefa6035e52bb92f991958d
	Oct 14 21:17:11 default-k8s-diff-port-319221 kubelet[1307]: W1014 21:17:11.014834    1307 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/crio-62c9adc9c0207f829b4a9c629b0fc6a7d7cc109250334780f683cc401ed5a8fe WatchSource:0}: Error finding container 62c9adc9c0207f829b4a9c629b0fc6a7d7cc109250334780f683cc401ed5a8fe: Status 404 returned error can't find the container with id 62c9adc9c0207f829b4a9c629b0fc6a7d7cc109250334780f683cc401ed5a8fe
	Oct 14 21:17:11 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:17:11.541796    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.541779032 podStartE2EDuration="41.541779032s" podCreationTimestamp="2025-10-14 21:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:17:11.52849768 +0000 UTC m=+48.551019386" watchObservedRunningTime="2025-10-14 21:17:11.541779032 +0000 UTC m=+48.564300747"
	Oct 14 21:17:13 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:17:13.623329    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lm24g" podStartSLOduration=46.623308243 podStartE2EDuration="46.623308243s" podCreationTimestamp="2025-10-14 21:16:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:17:11.542261989 +0000 UTC m=+48.564783711" watchObservedRunningTime="2025-10-14 21:17:13.623308243 +0000 UTC m=+50.645829949"
	Oct 14 21:17:13 default-k8s-diff-port-319221 kubelet[1307]: I1014 21:17:13.701996    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcblh\" (UniqueName: \"kubernetes.io/projected/6b1224d1-c597-4749-b584-15d6c3da4e51-kube-api-access-dcblh\") pod \"busybox\" (UID: \"6b1224d1-c597-4749-b584-15d6c3da4e51\") " pod="default/busybox"
	
	
	==> storage-provisioner [079598d9833b5f63eb6d7de4297aefca6f4fd86ce47b16b1fd056b8057b587f0] <==
	I1014 21:17:11.105808       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 21:17:11.177826       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 21:17:11.178409       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1014 21:17:11.189553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:11.201228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 21:17:11.201438       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 21:17:11.201622       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-319221_1f5ad756-fce5-47ff-a1a0-05dc0b5f24e4!
	I1014 21:17:11.202609       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32eebcf8-6bdd-4075-bafa-c8aa6a4d9e02", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-319221_1f5ad756-fce5-47ff-a1a0-05dc0b5f24e4 became leader
	W1014 21:17:11.208035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:11.212152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 21:17:11.302604       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-319221_1f5ad756-fce5-47ff-a1a0-05dc0b5f24e4!
	W1014 21:17:13.215767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:13.220840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:15.223350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:15.228110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:17.231406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:17.238544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:19.242317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:19.248199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:21.252032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:21.256638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:23.261234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:23.268620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
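The repeated "v1 Endpoints is deprecated" warnings in the storage-provisioner log above appear to come from its leader election, which still takes its lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, per the LeaderElection event in the same log); each ~2s renewal hits the Endpoints API and the server answers with the deprecation warning, so these lines are noise rather than a failure. A sketch of inspecting the lock by hand, using the kube context from this run:

    kubectl --context default-k8s-diff-port-319221 -n kube-system \
      get endpoints k8s.io-minikube-hostpath -o yaml
    # the current leader identity is recorded in the
    # control-plane.alpha.kubernetes.io/leader annotation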
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-319221 -n default-k8s-diff-port-319221
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-319221 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.46s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-650951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-650951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (320.846983ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:17:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-650951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
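The exit status 11 (MK_ADDON_ENABLE_PAUSED) above is raised before the addon itself is touched: per the stderr, minikube first checks whether the node has paused containers by running `sudo runc list -f json` inside the kic container, and that check fails because /run/runc does not exist at that point. A minimal sketch of reproducing the failing check by hand, assuming the profile and container name from this run:

    # via the minikube ssh tunnel
    out/minikube-linux-arm64 -p embed-certs-650951 ssh -- sudo runc list -f json
    # or directly against the docker container backing the node
    docker exec embed-certs-650951 runc list -f json

If either command reports "open /run/runc: no such file or directory", an `addons enable` against that profile will likely keep failing the same way, independent of which addon is requested.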
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-650951 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-650951 describe deploy/metrics-server -n kube-system: exit status 1 (121.499581ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-650951 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
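For context, the assertion at start_stop_delete_test.go:219 expects the metrics-server deployment's container image to carry the overridden registry, i.e. to contain "fake.domain/registry.k8s.io/echoserver:1.4"; because the enable call above exited before creating anything, the deployment is absent and the describe output is empty. On a run where the addon does enable, the same check could be done with a jsonpath query (an illustrative alternative to the describe call the test uses):

    kubectl --context embed-certs-650951 -n kube-system \
      get deployment metrics-server \
      -o jsonpath='{.spec.template.spec.containers[*].image}'
    # expected to contain: fake.domain/registry.k8s.io/echoserver:1.4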
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-650951
helpers_test.go:243: (dbg) docker inspect embed-certs-650951:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749",
	        "Created": "2025-10-14T21:16:16.666518706Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 736397,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:16:16.723847769Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749/hostname",
	        "HostsPath": "/var/lib/docker/containers/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749/hosts",
	        "LogPath": "/var/lib/docker/containers/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749-json.log",
	        "Name": "/embed-certs-650951",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-650951:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-650951",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749",
	                "LowerDir": "/var/lib/docker/overlay2/11159d7ae5bf1eee965b74cca31aeff653ffd76d7ceea8b33383d89124ef69e1-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11159d7ae5bf1eee965b74cca31aeff653ffd76d7ceea8b33383d89124ef69e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11159d7ae5bf1eee965b74cca31aeff653ffd76d7ceea8b33383d89124ef69e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11159d7ae5bf1eee965b74cca31aeff653ffd76d7ceea8b33383d89124ef69e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-650951",
	                "Source": "/var/lib/docker/volumes/embed-certs-650951/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-650951",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-650951",
	                "name.minikube.sigs.k8s.io": "embed-certs-650951",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84e38c5c1dbce359c4b31dc4673d6e31b02936c245fe7e91f7a03a3a7a18f8a4",
	            "SandboxKey": "/var/run/docker/netns/84e38c5c1dbc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33797"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33798"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33801"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33799"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33800"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-650951": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:a9:ca:7c:b9:05",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dec8f1ebdf6a87cb6a25803785fed11070bcba6788531da683aec0eefde26c55",
	                    "EndpointID": "115ade6faeb31faca4f79405959f3b172fecce7976d68ebde047c3c609ede0ac",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-650951",
	                        "b2e56cd2dd30"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
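One detail from the inspect output above: all node ports (22, 2376, 5000, 8443, 32443) are published only on 127.0.0.1 with ephemeral host ports, so the 8443/tcp API server port is bound to 127.0.0.1:33800 on the host. The mapping can be read back without parsing the full inspect JSON (a sketch, assuming the container name from this run):

    docker port embed-certs-650951 8443/tcp
    # 127.0.0.1:33800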
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-650951 -n embed-certs-650951
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-650951 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-650951 logs -n 25: (1.521517554s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p force-systemd-flag-162846                                                                                                                                                                                                                  │ force-systemd-flag-162846    │ jenkins │ v1.37.0 │ 14 Oct 25 21:11 UTC │ 14 Oct 25 21:11 UTC │
	│ start   │ -p cert-expiration-341199 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:11 UTC │ 14 Oct 25 21:12 UTC │
	│ delete  │ -p force-systemd-env-347955                                                                                                                                                                                                                   │ force-systemd-env-347955     │ jenkins │ v1.37.0 │ 14 Oct 25 21:12 UTC │ 14 Oct 25 21:12 UTC │
	│ start   │ -p cert-options-967887 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:12 UTC │ 14 Oct 25 21:13 UTC │
	│ ssh     │ cert-options-967887 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ ssh     │ -p cert-options-967887 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ delete  │ -p cert-options-967887                                                                                                                                                                                                                        │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ start   │ -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-624341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │                     │
	│ stop    │ -p old-k8s-version-624341 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-624341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:14 UTC │
	│ start   │ -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:15 UTC │
	│ start   │ -p cert-expiration-341199 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ delete  │ -p cert-expiration-341199                                                                                                                                                                                                                     │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ start   │ -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:17 UTC │
	│ image   │ old-k8s-version-624341 image list --format=json                                                                                                                                                                                               │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ pause   │ -p old-k8s-version-624341 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │                     │
	│ delete  │ -p old-k8s-version-624341                                                                                                                                                                                                                     │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:16 UTC │
	│ delete  │ -p old-k8s-version-624341                                                                                                                                                                                                                     │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:16 UTC │
	│ start   │ -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-319221 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-319221 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-319221 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ start   │ -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-650951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:17:36
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:17:36.417617  739852 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:17:36.417870  739852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:17:36.417883  739852 out.go:374] Setting ErrFile to fd 2...
	I1014 21:17:36.417889  739852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:17:36.418245  739852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:17:36.418740  739852 out.go:368] Setting JSON to false
	I1014 21:17:36.419857  739852 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14407,"bootTime":1760462250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:17:36.419942  739852 start.go:141] virtualization:  
	I1014 21:17:36.423208  739852 out.go:179] * [default-k8s-diff-port-319221] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:17:36.427008  739852 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:17:36.427104  739852 notify.go:220] Checking for updates...
	I1014 21:17:36.433534  739852 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:17:36.436554  739852 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:17:36.439656  739852 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:17:36.442669  739852 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:17:36.445636  739852 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:17:36.449091  739852 config.go:182] Loaded profile config "default-k8s-diff-port-319221": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:17:36.449822  739852 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:17:36.483708  739852 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:17:36.483830  739852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:17:36.554299  739852 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:17:36.544911139 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:17:36.554419  739852 docker.go:318] overlay module found
	I1014 21:17:36.557533  739852 out.go:179] * Using the docker driver based on existing profile
	I1014 21:17:36.560368  739852 start.go:305] selected driver: docker
	I1014 21:17:36.560387  739852 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-319221 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-319221 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:17:36.560501  739852 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:17:36.561221  739852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:17:36.621142  739852 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:17:36.610143648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:17:36.621496  739852 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:17:36.621535  739852 cni.go:84] Creating CNI manager for ""
	I1014 21:17:36.621591  739852 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:17:36.621638  739852 start.go:349] cluster config:
	{Name:default-k8s-diff-port-319221 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-319221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:17:36.626820  739852 out.go:179] * Starting "default-k8s-diff-port-319221" primary control-plane node in "default-k8s-diff-port-319221" cluster
	I1014 21:17:36.629732  739852 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:17:36.632641  739852 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:17:36.635333  739852 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:17:36.635389  739852 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 21:17:36.635405  739852 cache.go:58] Caching tarball of preloaded images
	I1014 21:17:36.635428  739852 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:17:36.635494  739852 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:17:36.635505  739852 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 21:17:36.635624  739852 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/config.json ...
	I1014 21:17:36.655041  739852 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:17:36.655065  739852 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:17:36.655091  739852 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:17:36.655115  739852 start.go:360] acquireMachinesLock for default-k8s-diff-port-319221: {Name:mkd932431e6f0afcc7b0e6021b1702e8970f2634 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:17:36.655178  739852 start.go:364] duration metric: took 38.376µs to acquireMachinesLock for "default-k8s-diff-port-319221"
	I1014 21:17:36.655202  739852 start.go:96] Skipping create...Using existing machine configuration
	I1014 21:17:36.655218  739852 fix.go:54] fixHost starting: 
	I1014 21:17:36.655481  739852 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-319221 --format={{.State.Status}}
	I1014 21:17:36.672349  739852 fix.go:112] recreateIfNeeded on default-k8s-diff-port-319221: state=Stopped err=<nil>
	W1014 21:17:36.672378  739852 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 21:17:36.675583  739852 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-319221" ...
	I1014 21:17:36.675673  739852 cli_runner.go:164] Run: docker start default-k8s-diff-port-319221
	I1014 21:17:36.916668  739852 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-319221 --format={{.State.Status}}
	I1014 21:17:36.939847  739852 kic.go:430] container "default-k8s-diff-port-319221" state is running.
	I1014 21:17:36.940231  739852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-319221
	I1014 21:17:36.966858  739852 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/config.json ...
	I1014 21:17:36.967126  739852 machine.go:93] provisionDockerMachine start ...
	I1014 21:17:36.967203  739852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:17:36.988102  739852 main.go:141] libmachine: Using SSH client type: native
	I1014 21:17:36.988474  739852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1014 21:17:36.988490  739852 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:17:36.989031  739852 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47890->127.0.0.1:33802: read: connection reset by peer
	I1014 21:17:40.156063  739852 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-319221
	
	I1014 21:17:40.156096  739852 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-319221"
	I1014 21:17:40.156198  739852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:17:40.174902  739852 main.go:141] libmachine: Using SSH client type: native
	I1014 21:17:40.175248  739852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1014 21:17:40.175267  739852 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-319221 && echo "default-k8s-diff-port-319221" | sudo tee /etc/hostname
	I1014 21:17:40.330405  739852 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-319221
	
	I1014 21:17:40.330482  739852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:17:40.349839  739852 main.go:141] libmachine: Using SSH client type: native
	I1014 21:17:40.350154  739852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1014 21:17:40.350179  739852 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-319221' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-319221/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-319221' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:17:40.496653  739852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 21:17:40.496682  739852 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:17:40.496701  739852 ubuntu.go:190] setting up certificates
	I1014 21:17:40.496710  739852 provision.go:84] configureAuth start
	I1014 21:17:40.496834  739852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-319221
	I1014 21:17:40.515406  739852 provision.go:143] copyHostCerts
	I1014 21:17:40.515484  739852 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:17:40.515506  739852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:17:40.515588  739852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:17:40.515703  739852 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:17:40.515715  739852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:17:40.515744  739852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:17:40.515809  739852 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:17:40.515819  739852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:17:40.515845  739852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:17:40.515907  739852 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-319221 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-319221 localhost minikube]
	I1014 21:17:41.300038  739852 provision.go:177] copyRemoteCerts
	I1014 21:17:41.300121  739852 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:17:41.300165  739852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:17:41.318985  739852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/default-k8s-diff-port-319221/id_rsa Username:docker}
	
	
	==> CRI-O <==
	Oct 14 21:17:31 embed-certs-650951 crio[838]: time="2025-10-14T21:17:31.031606805Z" level=info msg="Created container ea8a3368dd399c601d1663edc14ea2ca5a5477704a08ce1520fcd8bcaf20bc7d: kube-system/coredns-66bc5c9577-sllzf/coredns" id=79122826-0961-4700-80d7-ea35cd2242b1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:17:31 embed-certs-650951 crio[838]: time="2025-10-14T21:17:31.032528951Z" level=info msg="Starting container: ea8a3368dd399c601d1663edc14ea2ca5a5477704a08ce1520fcd8bcaf20bc7d" id=0eb76df4-80b7-426b-ba70-b1b461af05c5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:17:31 embed-certs-650951 crio[838]: time="2025-10-14T21:17:31.03420089Z" level=info msg="Started container" PID=1721 containerID=ea8a3368dd399c601d1663edc14ea2ca5a5477704a08ce1520fcd8bcaf20bc7d description=kube-system/coredns-66bc5c9577-sllzf/coredns id=0eb76df4-80b7-426b-ba70-b1b461af05c5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=36ad1a09d3a773e4a0bd8be1e16388c9865623b3a6914424975d34ca6951ffa7
	Oct 14 21:17:33 embed-certs-650951 crio[838]: time="2025-10-14T21:17:33.894315991Z" level=info msg="Running pod sandbox: default/busybox/POD" id=35b90ebf-f941-4530-9294-50969fa841cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:17:33 embed-certs-650951 crio[838]: time="2025-10-14T21:17:33.894394901Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:17:33 embed-certs-650951 crio[838]: time="2025-10-14T21:17:33.90329197Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:835c9328197170eb0448c671ce8a2a2acae020d687c2cc3d6fea705348a1aa35 UID:92172836-5a93-4799-bd2f-e485cdace7f5 NetNS:/var/run/netns/8d05f7d1-6cac-4828-8708-36939f86c0b2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012c958}] Aliases:map[]}"
	Oct 14 21:17:33 embed-certs-650951 crio[838]: time="2025-10-14T21:17:33.903330855Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 14 21:17:33 embed-certs-650951 crio[838]: time="2025-10-14T21:17:33.912242235Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:835c9328197170eb0448c671ce8a2a2acae020d687c2cc3d6fea705348a1aa35 UID:92172836-5a93-4799-bd2f-e485cdace7f5 NetNS:/var/run/netns/8d05f7d1-6cac-4828-8708-36939f86c0b2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012c958}] Aliases:map[]}"
	Oct 14 21:17:33 embed-certs-650951 crio[838]: time="2025-10-14T21:17:33.912714483Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 14 21:17:33 embed-certs-650951 crio[838]: time="2025-10-14T21:17:33.916781422Z" level=info msg="Ran pod sandbox 835c9328197170eb0448c671ce8a2a2acae020d687c2cc3d6fea705348a1aa35 with infra container: default/busybox/POD" id=35b90ebf-f941-4530-9294-50969fa841cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:17:33 embed-certs-650951 crio[838]: time="2025-10-14T21:17:33.91786463Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e55d0314-fbb1-4567-8e3a-cbde312f99cb name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:17:33 embed-certs-650951 crio[838]: time="2025-10-14T21:17:33.918015829Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e55d0314-fbb1-4567-8e3a-cbde312f99cb name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:17:33 embed-certs-650951 crio[838]: time="2025-10-14T21:17:33.91807668Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e55d0314-fbb1-4567-8e3a-cbde312f99cb name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:17:33 embed-certs-650951 crio[838]: time="2025-10-14T21:17:33.919054572Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ef3892bb-8f63-4e66-bdc1-ea80e92f3162 name=/runtime.v1.ImageService/PullImage
	Oct 14 21:17:33 embed-certs-650951 crio[838]: time="2025-10-14T21:17:33.920891446Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 14 21:17:36 embed-certs-650951 crio[838]: time="2025-10-14T21:17:36.067906446Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=ef3892bb-8f63-4e66-bdc1-ea80e92f3162 name=/runtime.v1.ImageService/PullImage
	Oct 14 21:17:36 embed-certs-650951 crio[838]: time="2025-10-14T21:17:36.068991098Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=09dd65a3-011f-4c11-87ef-173ab85e6689 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:17:36 embed-certs-650951 crio[838]: time="2025-10-14T21:17:36.073029917Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=74179b2f-c1fd-4c4d-a1a3-5ad53007a7fa name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:17:36 embed-certs-650951 crio[838]: time="2025-10-14T21:17:36.082080376Z" level=info msg="Creating container: default/busybox/busybox" id=c56a7595-dbba-4be6-9170-acffac9374a7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:17:36 embed-certs-650951 crio[838]: time="2025-10-14T21:17:36.082973779Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:17:36 embed-certs-650951 crio[838]: time="2025-10-14T21:17:36.088446238Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:17:36 embed-certs-650951 crio[838]: time="2025-10-14T21:17:36.089072516Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:17:36 embed-certs-650951 crio[838]: time="2025-10-14T21:17:36.115176855Z" level=info msg="Created container 9d765a0994522da8a674c61dbc3da16d2ce20f14caf86fe3e33f1a0de047e0ac: default/busybox/busybox" id=c56a7595-dbba-4be6-9170-acffac9374a7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:17:36 embed-certs-650951 crio[838]: time="2025-10-14T21:17:36.116529632Z" level=info msg="Starting container: 9d765a0994522da8a674c61dbc3da16d2ce20f14caf86fe3e33f1a0de047e0ac" id=71b4e4b1-63f1-472b-9641-93805b8851ec name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:17:36 embed-certs-650951 crio[838]: time="2025-10-14T21:17:36.119835905Z" level=info msg="Started container" PID=1775 containerID=9d765a0994522da8a674c61dbc3da16d2ce20f14caf86fe3e33f1a0de047e0ac description=default/busybox/busybox id=71b4e4b1-63f1-472b-9641-93805b8851ec name=/runtime.v1.RuntimeService/StartContainer sandboxID=835c9328197170eb0448c671ce8a2a2acae020d687c2cc3d6fea705348a1aa35
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	9d765a0994522       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   835c932819717       busybox                                      default
	ea8a3368dd399       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   36ad1a09d3a77       coredns-66bc5c9577-sllzf                     kube-system
	3e18cedc00977       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   9bd5395d4c1cb       storage-provisioner                          kube-system
	9528a175df4eb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   7dfd5934089ba       kindnet-x2jcr                                kube-system
	91caeecfaa0d2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   b03f63a59b459       kube-proxy-2xsjs                             kube-system
	232058be98bf4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   2dda9fe654e0f       kube-controller-manager-embed-certs-650951   kube-system
	bb8221478261f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   7ec09489e7cb4       kube-apiserver-embed-certs-650951            kube-system
	60561571ebcdb       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   db16e695a6174       kube-scheduler-embed-certs-650951            kube-system
	713973763a0c8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   9260cd51f7471       etcd-embed-certs-650951                      kube-system
	
	
	==> coredns [ea8a3368dd399c601d1663edc14ea2ca5a5477704a08ce1520fcd8bcaf20bc7d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48569 - 21970 "HINFO IN 7564559377368509803.1364242889625854561. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010984559s
	
	
	==> describe nodes <==
	Name:               embed-certs-650951
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-650951
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=embed-certs-650951
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T21_16_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 21:16:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-650951
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 21:17:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 21:17:30 +0000   Tue, 14 Oct 2025 21:16:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 21:17:30 +0000   Tue, 14 Oct 2025 21:16:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 21:17:30 +0000   Tue, 14 Oct 2025 21:16:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 21:17:30 +0000   Tue, 14 Oct 2025 21:17:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-650951
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 6099c22fdcea4f5fb84720667ff15349
	  System UUID:                27698974-6461-4c41-a998-671addfa4ff7
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-sllzf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 etcd-embed-certs-650951                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-x2jcr                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-embed-certs-650951             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-embed-certs-650951    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-2xsjs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-embed-certs-650951             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   Starting                 67s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)  kubelet          Node embed-certs-650951 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)  kubelet          Node embed-certs-650951 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)  kubelet          Node embed-certs-650951 status is now: NodeHasSufficientPID
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node embed-certs-650951 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node embed-certs-650951 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node embed-certs-650951 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node embed-certs-650951 event: Registered Node embed-certs-650951 in Controller
	  Normal   NodeReady                13s                kubelet          Node embed-certs-650951 status is now: NodeReady
	
	
	==> dmesg <==
	[ +36.710003] overlayfs: idmapped layers are currently not supported
	[Oct14 20:45] overlayfs: idmapped layers are currently not supported
	[Oct14 20:46] overlayfs: idmapped layers are currently not supported
	[Oct14 20:47] overlayfs: idmapped layers are currently not supported
	[Oct14 20:48] overlayfs: idmapped layers are currently not supported
	[Oct14 20:49] overlayfs: idmapped layers are currently not supported
	[ +15.861847] overlayfs: idmapped layers are currently not supported
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	[Oct14 21:03] overlayfs: idmapped layers are currently not supported
	[Oct14 21:04] overlayfs: idmapped layers are currently not supported
	[Oct14 21:12] overlayfs: idmapped layers are currently not supported
	[Oct14 21:13] overlayfs: idmapped layers are currently not supported
	[ +32.288963] overlayfs: idmapped layers are currently not supported
	[Oct14 21:15] overlayfs: idmapped layers are currently not supported
	[Oct14 21:16] overlayfs: idmapped layers are currently not supported
	[ +21.775876] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [713973763a0c82ba2bdf759b5b73357f3178226e28fd53c2d2665efce48c68e9] <==
	{"level":"warn","ts":"2025-10-14T21:16:39.964728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:39.982008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.004270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.025126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.043448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.067641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.089092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.111384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.135163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.159113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.172475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.194802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.215097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.241057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.252191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.271087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.290147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.305579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.332440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.348569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.361779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.396026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.424412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.437661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:16:40.542668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52578","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:17:43 up  4:00,  0 user,  load average: 2.46, 1.96, 1.76
	Linux embed-certs-650951 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9528a175df4eb41304a219a64ddc122c8b0f5ee61407e7bc1c3ac02ae235dfa0] <==
	I1014 21:16:49.977626       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:16:49.980675       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1014 21:16:49.980895       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:16:49.981740       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:16:49.981813       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:16:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:16:50.183976       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:16:50.184008       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:16:50.184018       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:16:50.184120       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1014 21:17:20.184994       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1014 21:17:20.184996       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1014 21:17:20.185119       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1014 21:17:20.185261       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1014 21:17:21.584814       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1014 21:17:21.584848       1 metrics.go:72] Registering metrics
	I1014 21:17:21.584915       1 controller.go:711] "Syncing nftables rules"
	I1014 21:17:30.183788       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1014 21:17:30.183852       1 main.go:301] handling current node
	I1014 21:17:40.186207       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1014 21:17:40.186420       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bb8221478261fd0c4938cc39d27803373e9a228adc2934017f9f6eac52d53edf] <==
	I1014 21:16:41.427117       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 21:16:41.433735       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:16:41.434638       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1014 21:16:41.454478       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 21:16:41.457990       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:16:41.458144       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1014 21:16:41.460619       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1014 21:16:42.134868       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1014 21:16:42.144106       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1014 21:16:42.144135       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 21:16:43.043961       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 21:16:43.104788       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 21:16:43.233495       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1014 21:16:43.241533       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1014 21:16:43.242920       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 21:16:43.248121       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 21:16:43.261527       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 21:16:44.331261       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1014 21:16:44.348806       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 21:16:44.358194       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1014 21:16:48.368897       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:16:48.373964       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:16:49.181122       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1014 21:16:49.477820       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1014 21:17:41.723440       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:34284: use of closed network connection
	
	
	==> kube-controller-manager [232058be98bf477558253edb82f34935bf34bba283594d45b8af1639ffed336c] <==
	I1014 21:16:48.260801       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1014 21:16:48.260905       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1014 21:16:48.262184       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1014 21:16:48.264924       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1014 21:16:48.265031       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1014 21:16:48.265038       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:16:48.266217       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1014 21:16:48.267478       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1014 21:16:48.271869       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:16:48.274246       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1014 21:16:48.275684       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1014 21:16:48.287246       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1014 21:16:48.292451       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:16:48.308451       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:16:48.308559       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1014 21:16:48.308592       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 21:16:48.311120       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1014 21:16:48.311557       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1014 21:16:48.312048       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1014 21:16:48.312072       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1014 21:16:48.312664       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1014 21:16:48.317892       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1014 21:16:48.317996       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1014 21:16:48.327219       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1014 21:17:33.263396       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [91caeecfaa0d2977f768dbcd979616660a51be728c0a5d91fff341da770498a7] <==
	I1014 21:16:49.924577       1 server_linux.go:53] "Using iptables proxy"
	I1014 21:16:50.036893       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 21:16:50.137648       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 21:16:50.137688       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1014 21:16:50.137755       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 21:16:50.158114       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:16:50.158180       1 server_linux.go:132] "Using iptables Proxier"
	I1014 21:16:50.162643       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 21:16:50.162992       1 server.go:527] "Version info" version="v1.34.1"
	I1014 21:16:50.163017       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:16:50.165057       1 config.go:200] "Starting service config controller"
	I1014 21:16:50.165081       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 21:16:50.165100       1 config.go:106] "Starting endpoint slice config controller"
	I1014 21:16:50.165105       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 21:16:50.165129       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 21:16:50.165133       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 21:16:50.165820       1 config.go:309] "Starting node config controller"
	I1014 21:16:50.165842       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 21:16:50.165850       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 21:16:50.265527       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 21:16:50.265531       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 21:16:50.265579       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [60561571ebcdbec244ed627660e50c3d13a1a318463382b06d38c2c4f774c73b] <==
	E1014 21:16:41.357251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1014 21:16:41.357313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1014 21:16:41.360134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1014 21:16:41.360213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1014 21:16:41.360280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1014 21:16:41.360939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1014 21:16:41.361023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1014 21:16:41.361077       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1014 21:16:41.361125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1014 21:16:41.361188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1014 21:16:41.361292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1014 21:16:41.361391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1014 21:16:41.370171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1014 21:16:42.267288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1014 21:16:42.299573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1014 21:16:42.317830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1014 21:16:42.372033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1014 21:16:42.457231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1014 21:16:42.549442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1014 21:16:42.549706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1014 21:16:42.594118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1014 21:16:42.612712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1014 21:16:42.635828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1014 21:16:42.680720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1014 21:16:45.319324       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 14 21:16:45 embed-certs-650951 kubelet[1301]: I1014 21:16:45.499248    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-650951" podStartSLOduration=1.499242134 podStartE2EDuration="1.499242134s" podCreationTimestamp="2025-10-14 21:16:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:16:45.472789855 +0000 UTC m=+1.288360375" watchObservedRunningTime="2025-10-14 21:16:45.499242134 +0000 UTC m=+1.314812670"
	Oct 14 21:16:45 embed-certs-650951 kubelet[1301]: I1014 21:16:45.568588    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-650951" podStartSLOduration=1.568569323 podStartE2EDuration="1.568569323s" podCreationTimestamp="2025-10-14 21:16:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:16:45.54805776 +0000 UTC m=+1.363628272" watchObservedRunningTime="2025-10-14 21:16:45.568569323 +0000 UTC m=+1.384139835"
	Oct 14 21:16:48 embed-certs-650951 kubelet[1301]: I1014 21:16:48.273449    1301 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 14 21:16:48 embed-certs-650951 kubelet[1301]: I1014 21:16:48.274106    1301 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 14 21:16:49 embed-certs-650951 kubelet[1301]: I1014 21:16:49.478034    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bbf4863-3dcb-41d8-9cb8-ab35659fc408-xtables-lock\") pod \"kube-proxy-2xsjs\" (UID: \"0bbf4863-3dcb-41d8-9cb8-ab35659fc408\") " pod="kube-system/kube-proxy-2xsjs"
	Oct 14 21:16:49 embed-certs-650951 kubelet[1301]: I1014 21:16:49.478077    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bbf4863-3dcb-41d8-9cb8-ab35659fc408-lib-modules\") pod \"kube-proxy-2xsjs\" (UID: \"0bbf4863-3dcb-41d8-9cb8-ab35659fc408\") " pod="kube-system/kube-proxy-2xsjs"
	Oct 14 21:16:49 embed-certs-650951 kubelet[1301]: I1014 21:16:49.478114    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a39835a1-f0d7-4387-9c8d-b91532773cfb-cni-cfg\") pod \"kindnet-x2jcr\" (UID: \"a39835a1-f0d7-4387-9c8d-b91532773cfb\") " pod="kube-system/kindnet-x2jcr"
	Oct 14 21:16:49 embed-certs-650951 kubelet[1301]: I1014 21:16:49.478135    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a39835a1-f0d7-4387-9c8d-b91532773cfb-lib-modules\") pod \"kindnet-x2jcr\" (UID: \"a39835a1-f0d7-4387-9c8d-b91532773cfb\") " pod="kube-system/kindnet-x2jcr"
	Oct 14 21:16:49 embed-certs-650951 kubelet[1301]: I1014 21:16:49.478156    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0bbf4863-3dcb-41d8-9cb8-ab35659fc408-kube-proxy\") pod \"kube-proxy-2xsjs\" (UID: \"0bbf4863-3dcb-41d8-9cb8-ab35659fc408\") " pod="kube-system/kube-proxy-2xsjs"
	Oct 14 21:16:49 embed-certs-650951 kubelet[1301]: I1014 21:16:49.478184    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvcxm\" (UniqueName: \"kubernetes.io/projected/0bbf4863-3dcb-41d8-9cb8-ab35659fc408-kube-api-access-qvcxm\") pod \"kube-proxy-2xsjs\" (UID: \"0bbf4863-3dcb-41d8-9cb8-ab35659fc408\") " pod="kube-system/kube-proxy-2xsjs"
	Oct 14 21:16:49 embed-certs-650951 kubelet[1301]: I1014 21:16:49.478210    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a39835a1-f0d7-4387-9c8d-b91532773cfb-xtables-lock\") pod \"kindnet-x2jcr\" (UID: \"a39835a1-f0d7-4387-9c8d-b91532773cfb\") " pod="kube-system/kindnet-x2jcr"
	Oct 14 21:16:49 embed-certs-650951 kubelet[1301]: I1014 21:16:49.478238    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l5sq\" (UniqueName: \"kubernetes.io/projected/a39835a1-f0d7-4387-9c8d-b91532773cfb-kube-api-access-2l5sq\") pod \"kindnet-x2jcr\" (UID: \"a39835a1-f0d7-4387-9c8d-b91532773cfb\") " pod="kube-system/kindnet-x2jcr"
	Oct 14 21:16:49 embed-certs-650951 kubelet[1301]: I1014 21:16:49.652509    1301 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 14 21:16:50 embed-certs-650951 kubelet[1301]: I1014 21:16:50.440887    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-x2jcr" podStartSLOduration=1.440866995 podStartE2EDuration="1.440866995s" podCreationTimestamp="2025-10-14 21:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:16:50.424239584 +0000 UTC m=+6.239810104" watchObservedRunningTime="2025-10-14 21:16:50.440866995 +0000 UTC m=+6.256437507"
	Oct 14 21:16:54 embed-certs-650951 kubelet[1301]: I1014 21:16:54.549737    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2xsjs" podStartSLOduration=5.549696283 podStartE2EDuration="5.549696283s" podCreationTimestamp="2025-10-14 21:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:16:50.441467919 +0000 UTC m=+6.257038439" watchObservedRunningTime="2025-10-14 21:16:54.549696283 +0000 UTC m=+10.365266795"
	Oct 14 21:17:30 embed-certs-650951 kubelet[1301]: I1014 21:17:30.595292    1301 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 14 21:17:30 embed-certs-650951 kubelet[1301]: I1014 21:17:30.789928    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/da644f72-95d9-4486-ae40-223ff853f318-tmp\") pod \"storage-provisioner\" (UID: \"da644f72-95d9-4486-ae40-223ff853f318\") " pod="kube-system/storage-provisioner"
	Oct 14 21:17:30 embed-certs-650951 kubelet[1301]: I1014 21:17:30.789981    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcp9k\" (UniqueName: \"kubernetes.io/projected/da644f72-95d9-4486-ae40-223ff853f318-kube-api-access-dcp9k\") pod \"storage-provisioner\" (UID: \"da644f72-95d9-4486-ae40-223ff853f318\") " pod="kube-system/storage-provisioner"
	Oct 14 21:17:30 embed-certs-650951 kubelet[1301]: I1014 21:17:30.790013    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29df3259-1ac0-4fa0-9cd6-67dd67036cd1-config-volume\") pod \"coredns-66bc5c9577-sllzf\" (UID: \"29df3259-1ac0-4fa0-9cd6-67dd67036cd1\") " pod="kube-system/coredns-66bc5c9577-sllzf"
	Oct 14 21:17:30 embed-certs-650951 kubelet[1301]: I1014 21:17:30.790032    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56gdl\" (UniqueName: \"kubernetes.io/projected/29df3259-1ac0-4fa0-9cd6-67dd67036cd1-kube-api-access-56gdl\") pod \"coredns-66bc5c9577-sllzf\" (UID: \"29df3259-1ac0-4fa0-9cd6-67dd67036cd1\") " pod="kube-system/coredns-66bc5c9577-sllzf"
	Oct 14 21:17:30 embed-certs-650951 kubelet[1301]: W1014 21:17:30.973888    1301 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749/crio-36ad1a09d3a773e4a0bd8be1e16388c9865623b3a6914424975d34ca6951ffa7 WatchSource:0}: Error finding container 36ad1a09d3a773e4a0bd8be1e16388c9865623b3a6914424975d34ca6951ffa7: Status 404 returned error can't find the container with id 36ad1a09d3a773e4a0bd8be1e16388c9865623b3a6914424975d34ca6951ffa7
	Oct 14 21:17:31 embed-certs-650951 kubelet[1301]: I1014 21:17:31.537493    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sllzf" podStartSLOduration=42.537473339 podStartE2EDuration="42.537473339s" podCreationTimestamp="2025-10-14 21:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:17:31.521892008 +0000 UTC m=+47.337462536" watchObservedRunningTime="2025-10-14 21:17:31.537473339 +0000 UTC m=+47.353043859"
	Oct 14 21:17:31 embed-certs-650951 kubelet[1301]: I1014 21:17:31.537852    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.537843948 podStartE2EDuration="41.537843948s" podCreationTimestamp="2025-10-14 21:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:17:31.537064386 +0000 UTC m=+47.352634906" watchObservedRunningTime="2025-10-14 21:17:31.537843948 +0000 UTC m=+47.353414468"
	Oct 14 21:17:33 embed-certs-650951 kubelet[1301]: I1014 21:17:33.711036    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvlq2\" (UniqueName: \"kubernetes.io/projected/92172836-5a93-4799-bd2f-e485cdace7f5-kube-api-access-gvlq2\") pod \"busybox\" (UID: \"92172836-5a93-4799-bd2f-e485cdace7f5\") " pod="default/busybox"
	Oct 14 21:17:33 embed-certs-650951 kubelet[1301]: W1014 21:17:33.914633    1301 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749/crio-835c9328197170eb0448c671ce8a2a2acae020d687c2cc3d6fea705348a1aa35 WatchSource:0}: Error finding container 835c9328197170eb0448c671ce8a2a2acae020d687c2cc3d6fea705348a1aa35: Status 404 returned error can't find the container with id 835c9328197170eb0448c671ce8a2a2acae020d687c2cc3d6fea705348a1aa35
	
	
	==> storage-provisioner [3e18cedc0097762c6e6add58958d8570cf9ab00aa5239cff0eee3e4b19507b70] <==
	I1014 21:17:31.009545       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 21:17:31.024351       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 21:17:31.024402       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1014 21:17:31.028761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:31.048526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 21:17:31.048752       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 21:17:31.051374       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-650951_00c573ad-55fa-4ca0-893f-8f9eb32e1f2b!
	I1014 21:17:31.053211       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e765fc2b-374f-4cc2-922a-737fb3491cd4", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-650951_00c573ad-55fa-4ca0-893f-8f9eb32e1f2b became leader
	W1014 21:17:31.071801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:31.108957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 21:17:31.152032       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-650951_00c573ad-55fa-4ca0-893f-8f9eb32e1f2b!
	W1014 21:17:33.111923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:33.118951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:35.122372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:35.126855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:37.130732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:37.141940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:39.146107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:39.150359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:41.154424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:41.162312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:43.169675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:17:43.174465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-650951 -n embed-certs-650951
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-650951 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-319221 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-319221 --alsologtostderr -v=1: exit status 80 (1.928718149s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-319221 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 21:18:44.405349  744915 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:18:44.405533  744915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:18:44.405547  744915 out.go:374] Setting ErrFile to fd 2...
	I1014 21:18:44.405553  744915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:18:44.405879  744915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:18:44.406219  744915 out.go:368] Setting JSON to false
	I1014 21:18:44.406264  744915 mustload.go:65] Loading cluster: default-k8s-diff-port-319221
	I1014 21:18:44.406671  744915 config.go:182] Loaded profile config "default-k8s-diff-port-319221": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:18:44.407351  744915 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-319221 --format={{.State.Status}}
	I1014 21:18:44.427410  744915 host.go:66] Checking if "default-k8s-diff-port-319221" exists ...
	I1014 21:18:44.427732  744915 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:18:44.507120  744915 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-14 21:18:44.489830137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:18:44.508224  744915 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-319221 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1014 21:18:44.515338  744915 out.go:179] * Pausing node default-k8s-diff-port-319221 ... 
	I1014 21:18:44.518254  744915 host.go:66] Checking if "default-k8s-diff-port-319221" exists ...
	I1014 21:18:44.518726  744915 ssh_runner.go:195] Run: systemctl --version
	I1014 21:18:44.518845  744915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-319221
	I1014 21:18:44.540769  744915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/default-k8s-diff-port-319221/id_rsa Username:docker}
	I1014 21:18:44.646901  744915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:18:44.665874  744915 pause.go:52] kubelet running: true
	I1014 21:18:44.665946  744915 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:18:44.983717  744915 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:18:44.983825  744915 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:18:45.146709  744915 cri.go:89] found id: "f361b540f6ffbd7d9ffc03c5553e783416a8e0b539626e67992aa338f683445d"
	I1014 21:18:45.146731  744915 cri.go:89] found id: "a5d1335313edfa9167baa641a267d1bf28579a1fd2d75e288836aca3279714f6"
	I1014 21:18:45.146737  744915 cri.go:89] found id: "f928c3872da21104d13ae2b485be1678e00a6f2bd1882dce4335b5c5bc58ef20"
	I1014 21:18:45.146741  744915 cri.go:89] found id: "06faac1a4580df264944b69244b55b6cef6ece45b1cdd6bf0da2440fcb694194"
	I1014 21:18:45.146744  744915 cri.go:89] found id: "2401b34dca3a489e7d8635faece0f20bdff99d068044275db0dd3354167b954c"
	I1014 21:18:45.146748  744915 cri.go:89] found id: "c76dda25418267dfa4c7d4626489bfaeb9c07ffefc2274e1731308f9b5899ab2"
	I1014 21:18:45.146755  744915 cri.go:89] found id: "0b6a360756e087c2443b2b0e2d881459ae06af4340a783adc594f403a6a0e4a7"
	I1014 21:18:45.146758  744915 cri.go:89] found id: "7b9318db856fe0d0f6a7117085a0adce0db784142ed7b06b332460753d08e8ae"
	I1014 21:18:45.146762  744915 cri.go:89] found id: "21c6e7b7c3b753bafef79064f5b38d7a56991049b34f4cd0684b2a1c8ffdd1c6"
	I1014 21:18:45.146772  744915 cri.go:89] found id: "f0c2c88aa3e33c7438999ff7823b3abf4b950c125e431b49ba13b42b0e840572"
	I1014 21:18:45.146776  744915 cri.go:89] found id: "b7fe5f8da58a66adccb01bb7fd806b32a9fe61f3e4fa8aa6b71649dc014f1c0a"
	I1014 21:18:45.146779  744915 cri.go:89] found id: ""
	I1014 21:18:45.146838  744915 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:18:45.162143  744915 retry.go:31] will retry after 153.403529ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:18:45Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:18:45.316771  744915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:18:45.336959  744915 pause.go:52] kubelet running: false
	I1014 21:18:45.337045  744915 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:18:45.520172  744915 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:18:45.520347  744915 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:18:45.591765  744915 cri.go:89] found id: "f361b540f6ffbd7d9ffc03c5553e783416a8e0b539626e67992aa338f683445d"
	I1014 21:18:45.591842  744915 cri.go:89] found id: "a5d1335313edfa9167baa641a267d1bf28579a1fd2d75e288836aca3279714f6"
	I1014 21:18:45.591862  744915 cri.go:89] found id: "f928c3872da21104d13ae2b485be1678e00a6f2bd1882dce4335b5c5bc58ef20"
	I1014 21:18:45.591883  744915 cri.go:89] found id: "06faac1a4580df264944b69244b55b6cef6ece45b1cdd6bf0da2440fcb694194"
	I1014 21:18:45.591910  744915 cri.go:89] found id: "2401b34dca3a489e7d8635faece0f20bdff99d068044275db0dd3354167b954c"
	I1014 21:18:45.591936  744915 cri.go:89] found id: "c76dda25418267dfa4c7d4626489bfaeb9c07ffefc2274e1731308f9b5899ab2"
	I1014 21:18:45.591954  744915 cri.go:89] found id: "0b6a360756e087c2443b2b0e2d881459ae06af4340a783adc594f403a6a0e4a7"
	I1014 21:18:45.591972  744915 cri.go:89] found id: "7b9318db856fe0d0f6a7117085a0adce0db784142ed7b06b332460753d08e8ae"
	I1014 21:18:45.591991  744915 cri.go:89] found id: "21c6e7b7c3b753bafef79064f5b38d7a56991049b34f4cd0684b2a1c8ffdd1c6"
	I1014 21:18:45.592015  744915 cri.go:89] found id: "f0c2c88aa3e33c7438999ff7823b3abf4b950c125e431b49ba13b42b0e840572"
	I1014 21:18:45.592033  744915 cri.go:89] found id: "b7fe5f8da58a66adccb01bb7fd806b32a9fe61f3e4fa8aa6b71649dc014f1c0a"
	I1014 21:18:45.592052  744915 cri.go:89] found id: ""
	I1014 21:18:45.592136  744915 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:18:45.603439  744915 retry.go:31] will retry after 367.564874ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:18:45Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:18:45.971862  744915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:18:45.986279  744915 pause.go:52] kubelet running: false
	I1014 21:18:45.986344  744915 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:18:46.167522  744915 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:18:46.167693  744915 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:18:46.245278  744915 cri.go:89] found id: "f361b540f6ffbd7d9ffc03c5553e783416a8e0b539626e67992aa338f683445d"
	I1014 21:18:46.245346  744915 cri.go:89] found id: "a5d1335313edfa9167baa641a267d1bf28579a1fd2d75e288836aca3279714f6"
	I1014 21:18:46.245368  744915 cri.go:89] found id: "f928c3872da21104d13ae2b485be1678e00a6f2bd1882dce4335b5c5bc58ef20"
	I1014 21:18:46.245389  744915 cri.go:89] found id: "06faac1a4580df264944b69244b55b6cef6ece45b1cdd6bf0da2440fcb694194"
	I1014 21:18:46.245412  744915 cri.go:89] found id: "2401b34dca3a489e7d8635faece0f20bdff99d068044275db0dd3354167b954c"
	I1014 21:18:46.245434  744915 cri.go:89] found id: "c76dda25418267dfa4c7d4626489bfaeb9c07ffefc2274e1731308f9b5899ab2"
	I1014 21:18:46.245455  744915 cri.go:89] found id: "0b6a360756e087c2443b2b0e2d881459ae06af4340a783adc594f403a6a0e4a7"
	I1014 21:18:46.245477  744915 cri.go:89] found id: "7b9318db856fe0d0f6a7117085a0adce0db784142ed7b06b332460753d08e8ae"
	I1014 21:18:46.245499  744915 cri.go:89] found id: "21c6e7b7c3b753bafef79064f5b38d7a56991049b34f4cd0684b2a1c8ffdd1c6"
	I1014 21:18:46.245521  744915 cri.go:89] found id: "f0c2c88aa3e33c7438999ff7823b3abf4b950c125e431b49ba13b42b0e840572"
	I1014 21:18:46.245540  744915 cri.go:89] found id: "b7fe5f8da58a66adccb01bb7fd806b32a9fe61f3e4fa8aa6b71649dc014f1c0a"
	I1014 21:18:46.245559  744915 cri.go:89] found id: ""
	I1014 21:18:46.245640  744915 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:18:46.261612  744915 out.go:203] 
	W1014 21:18:46.264664  744915 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:18:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:18:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 21:18:46.264693  744915 out.go:285] * 
	* 
	W1014 21:18:46.272653  744915 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 21:18:46.275554  744915 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-319221 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-319221
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-319221:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3",
	        "Created": "2025-10-14T21:15:54.491462539Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 739979,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:17:36.706711456Z",
	            "FinishedAt": "2025-10-14T21:17:35.805219634Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/hosts",
	        "LogPath": "/var/lib/docker/containers/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3-json.log",
	        "Name": "/default-k8s-diff-port-319221",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-319221:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-319221",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3",
	                "LowerDir": "/var/lib/docker/overlay2/882da2e1fffe67015b2504b67fde7e27d0bff25149ca31ed071d01511fda8d09-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/882da2e1fffe67015b2504b67fde7e27d0bff25149ca31ed071d01511fda8d09/merged",
	                "UpperDir": "/var/lib/docker/overlay2/882da2e1fffe67015b2504b67fde7e27d0bff25149ca31ed071d01511fda8d09/diff",
	                "WorkDir": "/var/lib/docker/overlay2/882da2e1fffe67015b2504b67fde7e27d0bff25149ca31ed071d01511fda8d09/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-319221",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-319221/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-319221",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-319221",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-319221",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84bb7ae81f55037a5e990b49d2dbd3e5f620df9b07d39bc57fd9e6d4fa9fe977",
	            "SandboxKey": "/var/run/docker/netns/84bb7ae81f55",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33802"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33803"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33806"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33804"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33805"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-319221": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:3a:70:9a:c2:e1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a784e56ec6e33ab3395b1054e1c771fec07076ba24af08b06eab32f8be7d3fc1",
	                    "EndpointID": "4143cf89c103cadea106d853ceaa23282d22d382f9847ba01708cb49a8dfeed3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-319221",
	                        "09b53eb1fb5a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-319221 -n default-k8s-diff-port-319221
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-319221 -n default-k8s-diff-port-319221: exit status 2 (340.633368ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-319221 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-319221 logs -n 25: (1.39416289s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cert-options-967887 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ delete  │ -p cert-options-967887                                                                                                                                                                                                                        │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ start   │ -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-624341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │                     │
	│ stop    │ -p old-k8s-version-624341 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-624341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:14 UTC │
	│ start   │ -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:15 UTC │
	│ start   │ -p cert-expiration-341199 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ delete  │ -p cert-expiration-341199                                                                                                                                                                                                                     │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ start   │ -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:17 UTC │
	│ image   │ old-k8s-version-624341 image list --format=json                                                                                                                                                                                               │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ pause   │ -p old-k8s-version-624341 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │                     │
	│ delete  │ -p old-k8s-version-624341                                                                                                                                                                                                                     │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:16 UTC │
	│ delete  │ -p old-k8s-version-624341                                                                                                                                                                                                                     │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:16 UTC │
	│ start   │ -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-319221 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-319221 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-319221 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ start   │ -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-650951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	│ stop    │ -p embed-certs-650951 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-650951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ start   │ -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	│ image   │ default-k8s-diff-port-319221 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ pause   │ -p default-k8s-diff-port-319221 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:17:57
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:17:57.476873  742505 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:17:57.477089  742505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:17:57.477117  742505 out.go:374] Setting ErrFile to fd 2...
	I1014 21:17:57.477136  742505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:17:57.477467  742505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:17:57.477865  742505 out.go:368] Setting JSON to false
	I1014 21:17:57.478895  742505 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14428,"bootTime":1760462250,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:17:57.478988  742505 start.go:141] virtualization:  
	I1014 21:17:57.484559  742505 out.go:179] * [embed-certs-650951] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:17:57.489152  742505 notify.go:220] Checking for updates...
	I1014 21:17:57.492470  742505 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:17:57.495732  742505 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:17:57.498702  742505 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:17:57.501672  742505 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:17:57.505508  742505 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:17:57.508821  742505 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:17:57.512422  742505 config.go:182] Loaded profile config "embed-certs-650951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:17:57.512992  742505 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:17:57.559230  742505 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:17:57.559338  742505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:17:57.663438  742505 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-14 21:17:57.650701596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:17:57.663547  742505 docker.go:318] overlay module found
	I1014 21:17:57.668185  742505 out.go:179] * Using the docker driver based on existing profile
	I1014 21:17:57.671479  742505 start.go:305] selected driver: docker
	I1014 21:17:57.671500  742505 start.go:925] validating driver "docker" against &{Name:embed-certs-650951 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-650951 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:17:57.671616  742505 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:17:57.672290  742505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:17:57.768126  742505 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-14 21:17:57.756227697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:17:57.768489  742505 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:17:57.768526  742505 cni.go:84] Creating CNI manager for ""
	I1014 21:17:57.768586  742505 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:17:57.768627  742505 start.go:349] cluster config:
	{Name:embed-certs-650951 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-650951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:17:57.773324  742505 out.go:179] * Starting "embed-certs-650951" primary control-plane node in "embed-certs-650951" cluster
	I1014 21:17:57.776577  742505 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:17:57.779800  742505 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:17:57.783015  742505 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:17:57.783080  742505 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 21:17:57.783096  742505 cache.go:58] Caching tarball of preloaded images
	I1014 21:17:57.783181  742505 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:17:57.783200  742505 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 21:17:57.783321  742505 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/config.json ...
	I1014 21:17:57.783539  742505 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:17:57.812541  742505 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:17:57.812561  742505 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:17:57.812581  742505 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:17:57.812603  742505 start.go:360] acquireMachinesLock for embed-certs-650951: {Name:mkc5e4e33d326b637da386142871e636ddbb8a63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:17:57.812652  742505 start.go:364] duration metric: took 33.108µs to acquireMachinesLock for "embed-certs-650951"
	I1014 21:17:57.812671  742505 start.go:96] Skipping create...Using existing machine configuration
	I1014 21:17:57.812682  742505 fix.go:54] fixHost starting: 
	I1014 21:17:57.812935  742505 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:17:57.831951  742505 fix.go:112] recreateIfNeeded on embed-certs-650951: state=Stopped err=<nil>
	W1014 21:17:57.831978  742505 fix.go:138] unexpected machine state, will restart: <nil>
	W1014 21:17:57.573088  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:00.127013  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	I1014 21:17:57.839578  742505 out.go:252] * Restarting existing docker container for "embed-certs-650951" ...
	I1014 21:17:57.839681  742505 cli_runner.go:164] Run: docker start embed-certs-650951
	I1014 21:17:58.181650  742505 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:17:58.213286  742505 kic.go:430] container "embed-certs-650951" state is running.
	I1014 21:17:58.213705  742505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-650951
	I1014 21:17:58.235696  742505 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/config.json ...
	I1014 21:17:58.235926  742505 machine.go:93] provisionDockerMachine start ...
	I1014 21:17:58.236010  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:17:58.261966  742505 main.go:141] libmachine: Using SSH client type: native
	I1014 21:17:58.262510  742505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1014 21:17:58.262536  742505 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:17:58.263199  742505 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50052->127.0.0.1:33807: read: connection reset by peer
	I1014 21:18:01.421111  742505 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-650951
	
	I1014 21:18:01.421207  742505 ubuntu.go:182] provisioning hostname "embed-certs-650951"
	I1014 21:18:01.421294  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:01.441647  742505 main.go:141] libmachine: Using SSH client type: native
	I1014 21:18:01.441958  742505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1014 21:18:01.441970  742505 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-650951 && echo "embed-certs-650951" | sudo tee /etc/hostname
	I1014 21:18:01.617937  742505 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-650951
	
	I1014 21:18:01.618032  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:01.638882  742505 main.go:141] libmachine: Using SSH client type: native
	I1014 21:18:01.639277  742505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1014 21:18:01.639302  742505 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-650951' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-650951/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-650951' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:18:01.806828  742505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 21:18:01.806854  742505 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:18:01.806882  742505 ubuntu.go:190] setting up certificates
	I1014 21:18:01.806892  742505 provision.go:84] configureAuth start
	I1014 21:18:01.806967  742505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-650951
	I1014 21:18:01.832517  742505 provision.go:143] copyHostCerts
	I1014 21:18:01.832588  742505 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:18:01.832608  742505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:18:01.832683  742505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:18:01.832790  742505 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:18:01.832800  742505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:18:01.832826  742505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:18:01.832888  742505 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:18:01.832898  742505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:18:01.832921  742505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:18:01.832977  742505 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.embed-certs-650951 san=[127.0.0.1 192.168.85.2 embed-certs-650951 localhost minikube]
	I1014 21:18:02.127084  742505 provision.go:177] copyRemoteCerts
	I1014 21:18:02.127159  742505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:18:02.127213  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:02.144508  742505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:18:02.253272  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:18:02.288807  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 21:18:02.329052  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 21:18:02.364916  742505 provision.go:87] duration metric: took 557.991625ms to configureAuth
	I1014 21:18:02.364951  742505 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:18:02.365182  742505 config.go:182] Loaded profile config "embed-certs-650951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:18:02.365324  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:02.389317  742505 main.go:141] libmachine: Using SSH client type: native
	I1014 21:18:02.389650  742505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1014 21:18:02.389673  742505 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:18:02.792609  742505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:18:02.792637  742505 machine.go:96] duration metric: took 4.556694925s to provisionDockerMachine
	I1014 21:18:02.792649  742505 start.go:293] postStartSetup for "embed-certs-650951" (driver="docker")
	I1014 21:18:02.792661  742505 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:18:02.792741  742505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:18:02.792792  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:02.824153  742505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:18:02.948817  742505 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:18:02.952603  742505 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:18:02.952634  742505 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:18:02.952646  742505 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:18:02.952702  742505 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:18:02.952785  742505 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:18:02.952916  742505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:18:02.961221  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:18:02.981598  742505 start.go:296] duration metric: took 188.932241ms for postStartSetup
	I1014 21:18:02.981681  742505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:18:02.981725  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:03.004900  742505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:18:03.115468  742505 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:18:03.121623  742505 fix.go:56] duration metric: took 5.308939791s for fixHost
	I1014 21:18:03.121693  742505 start.go:83] releasing machines lock for "embed-certs-650951", held for 5.309031337s
	I1014 21:18:03.121805  742505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-650951
	I1014 21:18:03.163181  742505 ssh_runner.go:195] Run: cat /version.json
	I1014 21:18:03.163249  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:03.167258  742505 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:18:03.167329  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:03.191129  742505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:18:03.191965  742505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:18:03.413838  742505 ssh_runner.go:195] Run: systemctl --version
	I1014 21:18:03.421464  742505 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:18:03.466124  742505 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:18:03.473317  742505 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:18:03.473383  742505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:18:03.481931  742505 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 21:18:03.481958  742505 start.go:495] detecting cgroup driver to use...
	I1014 21:18:03.481989  742505 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:18:03.482039  742505 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:18:03.499434  742505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:18:03.514846  742505 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:18:03.514912  742505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:18:03.531867  742505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:18:03.548444  742505 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:18:03.700199  742505 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:18:03.861053  742505 docker.go:234] disabling docker service ...
	I1014 21:18:03.861131  742505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:18:03.878392  742505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:18:03.892476  742505 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:18:04.050884  742505 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:18:04.224394  742505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:18:04.241818  742505 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:18:04.262961  742505 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:18:04.263025  742505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:18:04.272760  742505 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:18:04.272833  742505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:18:04.282223  742505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:18:04.304442  742505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:18:04.317107  742505 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:18:04.325809  742505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:18:04.337763  742505 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:18:04.347332  742505 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:18:04.358441  742505 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:18:04.368037  742505 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:18:04.377582  742505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:18:04.547782  742505 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 21:18:05.027711  742505 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:18:05.027801  742505 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:18:05.032783  742505 start.go:563] Will wait 60s for crictl version
	I1014 21:18:05.032853  742505 ssh_runner.go:195] Run: which crictl
	I1014 21:18:05.036943  742505 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:18:05.073130  742505 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:18:05.073216  742505 ssh_runner.go:195] Run: crio --version
	I1014 21:18:05.110296  742505 ssh_runner.go:195] Run: crio --version
	I1014 21:18:05.150170  742505 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1014 21:18:02.574676  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:04.580716  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	I1014 21:18:05.153171  742505 cli_runner.go:164] Run: docker network inspect embed-certs-650951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:18:05.174634  742505 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1014 21:18:05.178662  742505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:18:05.192468  742505 kubeadm.go:883] updating cluster {Name:embed-certs-650951 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-650951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:18:05.192594  742505 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:18:05.192661  742505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:18:05.252551  742505 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:18:05.252578  742505 crio.go:433] Images already preloaded, skipping extraction
	I1014 21:18:05.252634  742505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:18:05.290130  742505 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:18:05.290151  742505 cache_images.go:85] Images are preloaded, skipping loading
	I1014 21:18:05.290158  742505 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1014 21:18:05.290254  742505 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-650951 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-650951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 21:18:05.290329  742505 ssh_runner.go:195] Run: crio config
	I1014 21:18:05.389083  742505 cni.go:84] Creating CNI manager for ""
	I1014 21:18:05.389151  742505 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:18:05.389186  742505 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 21:18:05.389240  742505 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-650951 NodeName:embed-certs-650951 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:18:05.389402  742505 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-650951"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 21:18:05.389487  742505 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:18:05.397560  742505 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:18:05.397712  742505 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:18:05.405445  742505 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1014 21:18:05.419121  742505 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:18:05.433135  742505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1014 21:18:05.449469  742505 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:18:05.453723  742505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:18:05.463833  742505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:18:05.611520  742505 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:18:05.628815  742505 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951 for IP: 192.168.85.2
	I1014 21:18:05.628885  742505 certs.go:195] generating shared ca certs ...
	I1014 21:18:05.628916  742505 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:18:05.629086  742505 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:18:05.629165  742505 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:18:05.629206  742505 certs.go:257] generating profile certs ...
	I1014 21:18:05.629332  742505 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/client.key
	I1014 21:18:05.629450  742505 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/apiserver.key.613231d7
	I1014 21:18:05.629533  742505 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/proxy-client.key
	I1014 21:18:05.629704  742505 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:18:05.629777  742505 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:18:05.629807  742505 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:18:05.629870  742505 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:18:05.629946  742505 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:18:05.630010  742505 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:18:05.630098  742505 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:18:05.630942  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:18:05.652032  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:18:05.670884  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:18:05.697811  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:18:05.741151  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1014 21:18:05.778386  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 21:18:05.841516  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:18:05.909569  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 21:18:05.939271  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:18:05.973672  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:18:05.995693  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:18:06.019077  742505 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:18:06.037468  742505 ssh_runner.go:195] Run: openssl version
	I1014 21:18:06.044821  742505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:18:06.054842  742505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:18:06.060603  742505 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:18:06.060731  742505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:18:06.111157  742505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:18:06.120231  742505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:18:06.129776  742505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:18:06.134771  742505 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:18:06.134889  742505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:18:06.181676  742505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 21:18:06.191067  742505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:18:06.200838  742505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:18:06.205566  742505 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:18:06.205718  742505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:18:06.266415  742505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 21:18:06.275471  742505 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:18:06.279802  742505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 21:18:06.321931  742505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 21:18:06.365588  742505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 21:18:06.448454  742505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 21:18:06.552603  742505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 21:18:06.640427  742505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 21:18:06.742487  742505 kubeadm.go:400] StartCluster: {Name:embed-certs-650951 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-650951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:18:06.742656  742505 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:18:06.742763  742505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:18:06.941950  742505 cri.go:89] found id: "75ec260b360cee33d35761419902c0686412140608fc505b0a7fd6b06809b9be"
	I1014 21:18:06.942032  742505 cri.go:89] found id: "26600dc7ca36cf22f70e429a378f851aa6e2859e07559b7093ae575df3166637"
	I1014 21:18:06.942051  742505 cri.go:89] found id: "9a14bf7659c1b6c8efd0475e0eaeb245bdc49f5e97446a64f00af1e088d658af"
	I1014 21:18:06.942075  742505 cri.go:89] found id: "af87a8da5c17113367f1c826c6f15aabf7ed94ed0d2a08fe11374ca367dd615c"
	I1014 21:18:06.942107  742505 cri.go:89] found id: ""
	I1014 21:18:06.942193  742505 ssh_runner.go:195] Run: sudo runc list -f json
	W1014 21:18:06.977621  742505 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:18:06Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:18:06.977759  742505 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:18:07.037201  742505 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 21:18:07.037271  742505 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 21:18:07.037361  742505 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 21:18:07.055005  742505 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 21:18:07.055726  742505 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-650951" does not appear in /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:18:07.056057  742505 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-545047/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-650951" cluster setting kubeconfig missing "embed-certs-650951" context setting]
	I1014 21:18:07.056659  742505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:18:07.058565  742505 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 21:18:07.072985  742505 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1014 21:18:07.073069  742505 kubeadm.go:601] duration metric: took 35.777189ms to restartPrimaryControlPlane
	I1014 21:18:07.073112  742505 kubeadm.go:402] duration metric: took 330.640252ms to StartCluster
	I1014 21:18:07.073143  742505 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:18:07.073254  742505 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:18:07.074681  742505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:18:07.075102  742505 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:18:07.075477  742505 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 21:18:07.075554  742505 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-650951"
	I1014 21:18:07.075568  742505 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-650951"
	W1014 21:18:07.075574  742505 addons.go:247] addon storage-provisioner should already be in state true
	I1014 21:18:07.075596  742505 host.go:66] Checking if "embed-certs-650951" exists ...
	I1014 21:18:07.076060  742505 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:18:07.076495  742505 config.go:182] Loaded profile config "embed-certs-650951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:18:07.076579  742505 addons.go:69] Setting dashboard=true in profile "embed-certs-650951"
	I1014 21:18:07.076628  742505 addons.go:238] Setting addon dashboard=true in "embed-certs-650951"
	W1014 21:18:07.076654  742505 addons.go:247] addon dashboard should already be in state true
	I1014 21:18:07.076706  742505 host.go:66] Checking if "embed-certs-650951" exists ...
	I1014 21:18:07.077189  742505 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:18:07.084225  742505 addons.go:69] Setting default-storageclass=true in profile "embed-certs-650951"
	I1014 21:18:07.084270  742505 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-650951"
	I1014 21:18:07.085110  742505 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:18:07.092518  742505 out.go:179] * Verifying Kubernetes components...
	I1014 21:18:07.096504  742505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:18:07.116938  742505 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1014 21:18:07.124546  742505 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1014 21:18:07.128559  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1014 21:18:07.128586  742505 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1014 21:18:07.128669  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:07.134416  742505 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:18:07.140422  742505 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:18:07.140445  742505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 21:18:07.140512  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:07.154254  742505 addons.go:238] Setting addon default-storageclass=true in "embed-certs-650951"
	W1014 21:18:07.154281  742505 addons.go:247] addon default-storageclass should already be in state true
	I1014 21:18:07.154305  742505 host.go:66] Checking if "embed-certs-650951" exists ...
	I1014 21:18:07.154728  742505 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:18:07.197163  742505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:18:07.198078  742505 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 21:18:07.198092  742505 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 21:18:07.198148  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:07.210010  742505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:18:07.236532  742505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:18:07.437370  742505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 21:18:07.084145  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:09.570959  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	I1014 21:18:07.530450  742505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 21:18:07.554601  742505 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:18:07.717973  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1014 21:18:07.718002  742505 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1014 21:18:07.857458  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1014 21:18:07.857484  742505 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1014 21:18:07.941400  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1014 21:18:07.941429  742505 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1014 21:18:07.961266  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1014 21:18:07.961292  742505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1014 21:18:07.994629  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1014 21:18:07.994663  742505 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1014 21:18:08.023726  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1014 21:18:08.023754  742505 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1014 21:18:08.046577  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1014 21:18:08.046604  742505 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1014 21:18:08.078923  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1014 21:18:08.078963  742505 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1014 21:18:08.100349  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 21:18:08.100372  742505 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1014 21:18:08.127107  742505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 21:18:13.263035  742505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.825553079s)
	I1014 21:18:13.263138  742505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.732656299s)
	I1014 21:18:13.263174  742505 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.70853855s)
	I1014 21:18:13.263727  742505 node_ready.go:35] waiting up to 6m0s for node "embed-certs-650951" to be "Ready" ...
	I1014 21:18:13.263252  742505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.136115007s)
	I1014 21:18:13.268699  742505 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-650951 addons enable metrics-server
	
	I1014 21:18:13.305629  742505 node_ready.go:49] node "embed-certs-650951" is "Ready"
	I1014 21:18:13.305709  742505 node_ready.go:38] duration metric: took 41.946602ms for node "embed-certs-650951" to be "Ready" ...
	I1014 21:18:13.305737  742505 api_server.go:52] waiting for apiserver process to appear ...
	I1014 21:18:13.305821  742505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 21:18:13.319861  742505 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1014 21:18:11.573515  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:14.072008  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:16.072406  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	I1014 21:18:13.323045  742505 addons.go:514] duration metric: took 6.247551025s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1014 21:18:13.354441  742505 api_server.go:72] duration metric: took 6.27927771s to wait for apiserver process to appear ...
	I1014 21:18:13.354466  742505 api_server.go:88] waiting for apiserver healthz status ...
	I1014 21:18:13.354489  742505 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:18:13.362861  742505 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 21:18:13.362893  742505 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 21:18:13.855131  742505 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:18:13.865892  742505 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1014 21:18:13.866950  742505 api_server.go:141] control plane version: v1.34.1
	I1014 21:18:13.866972  742505 api_server.go:131] duration metric: took 512.498849ms to wait for apiserver health ...
	I1014 21:18:13.866981  742505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 21:18:13.874405  742505 system_pods.go:59] 8 kube-system pods found
	I1014 21:18:13.874447  742505 system_pods.go:61] "coredns-66bc5c9577-sllzf" [29df3259-1ac0-4fa0-9cd6-67dd67036cd1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 21:18:13.874457  742505 system_pods.go:61] "etcd-embed-certs-650951" [bf83dcd6-0b82-4546-b26c-1a294852d7a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 21:18:13.874466  742505 system_pods.go:61] "kindnet-x2jcr" [a39835a1-f0d7-4387-9c8d-b91532773cfb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1014 21:18:13.874479  742505 system_pods.go:61] "kube-apiserver-embed-certs-650951" [690a7550-709a-4373-b54e-62f54bfcf4f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 21:18:13.874487  742505 system_pods.go:61] "kube-controller-manager-embed-certs-650951" [7b3614b2-c0b2-4898-8bed-b927018b1f52] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 21:18:13.874498  742505 system_pods.go:61] "kube-proxy-2xsjs" [0bbf4863-3dcb-41d8-9cb8-ab35659fc408] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 21:18:13.874505  742505 system_pods.go:61] "kube-scheduler-embed-certs-650951" [76abd18d-656f-4197-b432-65dcd7558c1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 21:18:13.874511  742505 system_pods.go:61] "storage-provisioner" [da644f72-95d9-4486-ae40-223ff853f318] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 21:18:13.874519  742505 system_pods.go:74] duration metric: took 7.53322ms to wait for pod list to return data ...
	I1014 21:18:13.874531  742505 default_sa.go:34] waiting for default service account to be created ...
	I1014 21:18:13.878570  742505 default_sa.go:45] found service account: "default"
	I1014 21:18:13.878596  742505 default_sa.go:55] duration metric: took 4.057672ms for default service account to be created ...
	I1014 21:18:13.878606  742505 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 21:18:13.882588  742505 system_pods.go:86] 8 kube-system pods found
	I1014 21:18:13.882617  742505 system_pods.go:89] "coredns-66bc5c9577-sllzf" [29df3259-1ac0-4fa0-9cd6-67dd67036cd1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 21:18:13.882627  742505 system_pods.go:89] "etcd-embed-certs-650951" [bf83dcd6-0b82-4546-b26c-1a294852d7a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 21:18:13.882635  742505 system_pods.go:89] "kindnet-x2jcr" [a39835a1-f0d7-4387-9c8d-b91532773cfb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1014 21:18:13.882642  742505 system_pods.go:89] "kube-apiserver-embed-certs-650951" [690a7550-709a-4373-b54e-62f54bfcf4f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 21:18:13.882653  742505 system_pods.go:89] "kube-controller-manager-embed-certs-650951" [7b3614b2-c0b2-4898-8bed-b927018b1f52] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 21:18:13.882670  742505 system_pods.go:89] "kube-proxy-2xsjs" [0bbf4863-3dcb-41d8-9cb8-ab35659fc408] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 21:18:13.882680  742505 system_pods.go:89] "kube-scheduler-embed-certs-650951" [76abd18d-656f-4197-b432-65dcd7558c1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 21:18:13.882686  742505 system_pods.go:89] "storage-provisioner" [da644f72-95d9-4486-ae40-223ff853f318] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 21:18:13.882696  742505 system_pods.go:126] duration metric: took 4.084832ms to wait for k8s-apps to be running ...
	I1014 21:18:13.882707  742505 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 21:18:13.882764  742505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:18:13.901083  742505 system_svc.go:56] duration metric: took 18.366032ms WaitForService to wait for kubelet
	I1014 21:18:13.901112  742505 kubeadm.go:586] duration metric: took 6.825954309s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:18:13.901134  742505 node_conditions.go:102] verifying NodePressure condition ...
	I1014 21:18:13.910641  742505 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 21:18:13.910669  742505 node_conditions.go:123] node cpu capacity is 2
	I1014 21:18:13.910681  742505 node_conditions.go:105] duration metric: took 9.542084ms to run NodePressure ...
	I1014 21:18:13.910692  742505 start.go:241] waiting for startup goroutines ...
	I1014 21:18:13.910699  742505 start.go:246] waiting for cluster config update ...
	I1014 21:18:13.910710  742505 start.go:255] writing updated cluster config ...
	I1014 21:18:13.910978  742505 ssh_runner.go:195] Run: rm -f paused
	I1014 21:18:13.916666  742505 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 21:18:13.920813  742505 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sllzf" in "kube-system" namespace to be "Ready" or be gone ...
	W1014 21:18:15.926460  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:18.076345  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:20.571636  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:18.427740  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:20.926674  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:23.078460  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:25.571467  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:22.939518  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:25.426639  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:27.426807  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:27.571621  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:30.076358  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	I1014 21:18:31.071756  739852 pod_ready.go:94] pod "coredns-66bc5c9577-lm24g" is "Ready"
	I1014 21:18:31.071785  739852 pod_ready.go:86] duration metric: took 38.006260006s for pod "coredns-66bc5c9577-lm24g" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:31.074667  739852 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:31.079996  739852 pod_ready.go:94] pod "etcd-default-k8s-diff-port-319221" is "Ready"
	I1014 21:18:31.080025  739852 pod_ready.go:86] duration metric: took 5.327599ms for pod "etcd-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:31.082573  739852 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:31.088369  739852 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-319221" is "Ready"
	I1014 21:18:31.088399  739852 pod_ready.go:86] duration metric: took 5.794407ms for pod "kube-apiserver-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:31.091160  739852 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:31.269083  739852 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-319221" is "Ready"
	I1014 21:18:31.269113  739852 pod_ready.go:86] duration metric: took 177.924502ms for pod "kube-controller-manager-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	W1014 21:18:29.926892  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:32.427246  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	I1014 21:18:31.469358  739852 pod_ready.go:83] waiting for pod "kube-proxy-gc4zh" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:31.869597  739852 pod_ready.go:94] pod "kube-proxy-gc4zh" is "Ready"
	I1014 21:18:31.869622  739852 pod_ready.go:86] duration metric: took 400.237084ms for pod "kube-proxy-gc4zh" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:32.070068  739852 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:32.469305  739852 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-319221" is "Ready"
	I1014 21:18:32.469337  739852 pod_ready.go:86] duration metric: took 399.194637ms for pod "kube-scheduler-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:32.469351  739852 pod_ready.go:40] duration metric: took 39.407616446s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 21:18:32.534174  739852 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1014 21:18:32.537301  739852 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-319221" cluster and "default" namespace by default
	W1014 21:18:34.431334  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:36.926725  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:38.927558  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:41.427239  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
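	(Editor's note on the wait loop above: the pod_ready.go W-lines are minikube repeatedly re-reading each kube-system pod until its Ready condition turns true; coredns-66bc5c9577-lm24g flips to Ready at 21:18:31 for the default-k8s-diff-port cluster, while coredns-66bc5c9577-sllzf is still being retried when this log is cut off. Below is a minimal, hypothetical client-go sketch of that kind of poll, for orientation only; the helper name waitForPodReady, the kubeconfig handling, and the intervals are illustrative assumptions, not minikube's actual implementation.)

	// Hypothetical sketch of a pod-Ready poll loop (not minikube's code).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady re-reads the pod every 2s until its Ready condition is
	// True or the timeout expires, roughly what the W-lines above correspond to.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient error: keep retrying
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForPodReady(context.Background(), cs, "kube-system", "coredns-66bc5c9577-sllzf", 4*time.Minute); err != nil {
			fmt.Println("pod not ready:", err)
		}
	}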
	
	
	==> CRI-O <==
	Oct 14 21:18:20 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:20.88263494Z" level=info msg="Removed container fd4d31ec998842bbcb345d3d43ed7d6eb7cffac54c531548ae712c7d39a4c093: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fzndb/dashboard-metrics-scraper" id=c3a7813a-2a25-424e-8a42-4a661503bf90 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 21:18:22 default-k8s-diff-port-319221 conmon[1138]: conmon 2401b34dca3a489e7d86 <ninfo>: container 1142 exited with status 1
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.882958051Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=73a6100e-3dc3-4008-839e-5436e20e2024 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.887808798Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=59807aa1-d655-4796-af61-ee390318bda6 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.891760392Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=0d0cd9d7-4418-4c55-b03d-de922330604f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.892193034Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.9050908Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.907296403Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1615956c6c79e200cf12bbd96ece7276ad16f7de87aed8c35cf82bf3be4b90e5/merged/etc/passwd: no such file or directory"
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.907486323Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1615956c6c79e200cf12bbd96ece7276ad16f7de87aed8c35cf82bf3be4b90e5/merged/etc/group: no such file or directory"
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.907840374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.945164577Z" level=info msg="Created container f361b540f6ffbd7d9ffc03c5553e783416a8e0b539626e67992aa338f683445d: kube-system/storage-provisioner/storage-provisioner" id=0d0cd9d7-4418-4c55-b03d-de922330604f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.94632045Z" level=info msg="Starting container: f361b540f6ffbd7d9ffc03c5553e783416a8e0b539626e67992aa338f683445d" id=7e176023-c102-4d0c-95f2-cc975c781047 name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.948536326Z" level=info msg="Started container" PID=1652 containerID=f361b540f6ffbd7d9ffc03c5553e783416a8e0b539626e67992aa338f683445d description=kube-system/storage-provisioner/storage-provisioner id=7e176023-c102-4d0c-95f2-cc975c781047 name=/runtime.v1.RuntimeService/StartContainer sandboxID=82f43053fa4a284fc3c56e1617afa4afa71067c2ed37188bb0d272b520a1e7fc
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.313776318Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.317801481Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.317836739Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.317860354Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.321014162Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.321055336Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.321077827Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.324462844Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.324501967Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.324527288Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.328123567Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.328158825Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	f361b540f6ffb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   82f43053fa4a2       storage-provisioner                                    kube-system
	f0c2c88aa3e33       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago       Exited              dashboard-metrics-scraper   2                   a87bcdfa16ba4       dashboard-metrics-scraper-6ffb444bf9-fzndb             kubernetes-dashboard
	b7fe5f8da58a6       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   455839f49512b       kubernetes-dashboard-855c9754f9-kvztt                  kubernetes-dashboard
	a5d1335313edf       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   c1b6547ca8e2b       coredns-66bc5c9577-lm24g                               kube-system
	634729bf4e527       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   d6007dc5b48d5       busybox                                                default
	f928c3872da21       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   d7aae5f9c166b       kindnet-5hx2c                                          kube-system
	06faac1a4580d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   8e8e8da690fdd       kube-proxy-gc4zh                                       kube-system
	2401b34dca3a4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   82f43053fa4a2       storage-provisioner                                    kube-system
	c76dda2541826       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   d08b27be3c186       kube-scheduler-default-k8s-diff-port-319221            kube-system
	0b6a360756e08       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   3d6d77b0a4f6c       kube-apiserver-default-k8s-diff-port-319221            kube-system
	7b9318db856fe       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   a2eda7660c50a       etcd-default-k8s-diff-port-319221                      kube-system
	21c6e7b7c3b75       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   57de1d66f4f1d       kube-controller-manager-default-k8s-diff-port-319221   kube-system
	
	
	==> coredns [a5d1335313edfa9167baa641a267d1bf28579a1fd2d75e288836aca3279714f6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36107 - 34192 "HINFO IN 663654126897797571.5184740510088049192. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.023114193s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-319221
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-319221
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=default-k8s-diff-port-319221
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T21_16_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 21:16:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-319221
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 21:18:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 21:18:42 +0000   Tue, 14 Oct 2025 21:16:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 21:18:42 +0000   Tue, 14 Oct 2025 21:16:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 21:18:42 +0000   Tue, 14 Oct 2025 21:16:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 21:18:42 +0000   Tue, 14 Oct 2025 21:17:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-319221
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 f79511d9bd8b4db9ad629d2c2b7207bb
	  System UUID:                a87a8a0d-9a85-45a1-87ca-6b235dc2ea96
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-lm24g                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-default-k8s-diff-port-319221                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-5hx2c                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-319221             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-319221    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-gc4zh                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-default-k8s-diff-port-319221             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fzndb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-kvztt                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   Starting                 54s                    kube-proxy       
	  Normal   Starting                 2m33s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m33s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m33s (x8 over 2m33s)  kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s (x8 over 2m33s)  kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s (x8 over 2m33s)  kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m24s                  kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m24s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m24s                  kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m24s                  kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m21s                  node-controller  Node default-k8s-diff-port-319221 event: Registered Node default-k8s-diff-port-319221 in Controller
	  Normal   NodeReady                97s                    kubelet          Node default-k8s-diff-port-319221 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node default-k8s-diff-port-319221 event: Registered Node default-k8s-diff-port-319221 in Controller
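	(Editor's note on the node description above: the Conditions table (Ready, MemoryPressure, DiskPressure, PIDPressure) and the capacity figures the test asserts on come straight from the node's status object in the API. A minimal, hypothetical client-go sketch that reads the same fields is below, using the node name from this report; the kubeconfig handling and output formatting are illustrative assumptions.)

	// Hypothetical sketch: read the node conditions shown by `describe nodes`.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		node, err := cs.CoreV1().Nodes().Get(context.Background(), "default-k8s-diff-port-319221", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Print each condition type with its status and reason, mirroring the
		// Conditions block of the describe output above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}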
	
	
	==> dmesg <==
	[Oct14 20:46] overlayfs: idmapped layers are currently not supported
	[Oct14 20:47] overlayfs: idmapped layers are currently not supported
	[Oct14 20:48] overlayfs: idmapped layers are currently not supported
	[Oct14 20:49] overlayfs: idmapped layers are currently not supported
	[ +15.861847] overlayfs: idmapped layers are currently not supported
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	[Oct14 21:03] overlayfs: idmapped layers are currently not supported
	[Oct14 21:04] overlayfs: idmapped layers are currently not supported
	[Oct14 21:12] overlayfs: idmapped layers are currently not supported
	[Oct14 21:13] overlayfs: idmapped layers are currently not supported
	[ +32.288963] overlayfs: idmapped layers are currently not supported
	[Oct14 21:15] overlayfs: idmapped layers are currently not supported
	[Oct14 21:16] overlayfs: idmapped layers are currently not supported
	[ +21.775876] overlayfs: idmapped layers are currently not supported
	[Oct14 21:17] overlayfs: idmapped layers are currently not supported
	[Oct14 21:18] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7b9318db856fe0d0f6a7117085a0adce0db784142ed7b06b332460753d08e8ae] <==
	{"level":"warn","ts":"2025-10-14T21:17:49.746236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.767612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.782326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.805742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.835494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.846650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.863856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.884781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.898057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.913302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.931797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.947761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.964833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.981176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.998023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.026504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.040566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.058708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.076098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.110715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.161489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.196884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.211452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.233581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.292897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50510","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:18:47 up  4:01,  0 user,  load average: 3.64, 2.52, 1.98
	Linux default-k8s-diff-port-319221 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f928c3872da21104d13ae2b485be1678e00a6f2bd1882dce4335b5c5bc58ef20] <==
	I1014 21:17:52.077803       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:17:52.078822       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1014 21:17:52.079001       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:17:52.079119       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:17:52.079175       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:17:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:17:52.315351       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:17:52.315458       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:17:52.315493       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:17:52.318531       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1014 21:18:22.315344       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1014 21:18:22.318518       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1014 21:18:22.318617       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1014 21:18:22.318709       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1014 21:18:23.917895       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1014 21:18:23.917986       1 metrics.go:72] Registering metrics
	I1014 21:18:23.918077       1 controller.go:711] "Syncing nftables rules"
	I1014 21:18:32.312957       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1014 21:18:32.313008       1 main.go:301] handling current node
	I1014 21:18:42.317923       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1014 21:18:42.317961       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0b6a360756e087c2443b2b0e2d881459ae06af4340a783adc594f403a6a0e4a7] <==
	I1014 21:17:51.192509       1 policy_source.go:240] refreshing policies
	I1014 21:17:51.192736       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 21:17:51.198899       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1014 21:17:51.219526       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1014 21:17:51.219594       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1014 21:17:51.264587       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1014 21:17:51.267582       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 21:17:51.279090       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 21:17:51.279399       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1014 21:17:51.279429       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 21:17:51.279437       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 21:17:51.288877       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 21:17:51.288975       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1014 21:17:51.328990       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:17:51.600715       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 21:17:51.988516       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 21:17:52.184686       1 controller.go:667] quota admission added evaluator for: namespaces
	I1014 21:17:52.336052       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1014 21:17:52.373023       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 21:17:52.385498       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 21:17:52.491377       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.85.52"}
	I1014 21:17:52.511617       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.125.45"}
	I1014 21:17:54.640687       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 21:17:55.041527       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1014 21:17:55.093729       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [21c6e7b7c3b753bafef79064f5b38d7a56991049b34f4cd0684b2a1c8ffdd1c6] <==
	I1014 21:17:54.543988       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1014 21:17:54.546482       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1014 21:17:54.546976       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1014 21:17:54.548215       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1014 21:17:54.551470       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1014 21:17:54.553655       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1014 21:17:54.553730       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:17:54.553737       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1014 21:17:54.553743       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 21:17:54.557460       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1014 21:17:54.557565       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1014 21:17:54.559319       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1014 21:17:54.561721       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1014 21:17:54.564198       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1014 21:17:54.567494       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1014 21:17:54.568742       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:17:54.569914       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1014 21:17:54.573190       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1014 21:17:54.583659       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1014 21:17:54.583726       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1014 21:17:54.583785       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1014 21:17:54.583852       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1014 21:17:54.584290       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1014 21:17:54.584884       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1014 21:17:54.602571       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [06faac1a4580df264944b69244b55b6cef6ece45b1cdd6bf0da2440fcb694194] <==
	I1014 21:17:52.522402       1 server_linux.go:53] "Using iptables proxy"
	I1014 21:17:52.622291       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 21:17:52.722380       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 21:17:52.722480       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1014 21:17:52.722577       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 21:17:52.740218       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:17:52.740357       1 server_linux.go:132] "Using iptables Proxier"
	I1014 21:17:52.743867       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 21:17:52.744171       1 server.go:527] "Version info" version="v1.34.1"
	I1014 21:17:52.744239       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:17:52.747732       1 config.go:106] "Starting endpoint slice config controller"
	I1014 21:17:52.747803       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 21:17:52.748124       1 config.go:200] "Starting service config controller"
	I1014 21:17:52.748192       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 21:17:52.748692       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 21:17:52.748740       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 21:17:52.749198       1 config.go:309] "Starting node config controller"
	I1014 21:17:52.749245       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 21:17:52.749274       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 21:17:52.848829       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1014 21:17:52.848812       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 21:17:52.848853       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c76dda25418267dfa4c7d4626489bfaeb9c07ffefc2274e1731308f9b5899ab2] <==
	I1014 21:17:48.354929       1 serving.go:386] Generated self-signed cert in-memory
	W1014 21:17:51.129644       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1014 21:17:51.129689       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1014 21:17:51.129704       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 21:17:51.129711       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 21:17:51.261950       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 21:17:51.261981       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:17:51.267320       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:17:51.272406       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:17:51.273376       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 21:17:51.273436       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 21:17:51.373226       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 14 21:17:55 default-k8s-diff-port-319221 kubelet[780]: W1014 21:17:55.504517     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/crio-a87bcdfa16ba4059c671c83887f46661718dce987363382be494b34965198062 WatchSource:0}: Error finding container a87bcdfa16ba4059c671c83887f46661718dce987363382be494b34965198062: Status 404 returned error can't find the container with id a87bcdfa16ba4059c671c83887f46661718dce987363382be494b34965198062
	Oct 14 21:17:55 default-k8s-diff-port-319221 kubelet[780]: W1014 21:17:55.514405     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/crio-455839f49512b3f411cb0143c25930918e3ce40aa621e91eda41cfc09c09470b WatchSource:0}: Error finding container 455839f49512b3f411cb0143c25930918e3ce40aa621e91eda41cfc09c09470b: Status 404 returned error can't find the container with id 455839f49512b3f411cb0143c25930918e3ce40aa621e91eda41cfc09c09470b
	Oct 14 21:18:00 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:00.561640     780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 14 21:18:00 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:00.796164     780 scope.go:117] "RemoveContainer" containerID="ba6c1d7b3b544f7f7b5f38844782ff3b9db297b3b67aa844e397f62bc32dbf94"
	Oct 14 21:18:01 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:01.804202     780 scope.go:117] "RemoveContainer" containerID="ba6c1d7b3b544f7f7b5f38844782ff3b9db297b3b67aa844e397f62bc32dbf94"
	Oct 14 21:18:01 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:01.804973     780 scope.go:117] "RemoveContainer" containerID="fd4d31ec998842bbcb345d3d43ed7d6eb7cffac54c531548ae712c7d39a4c093"
	Oct 14 21:18:01 default-k8s-diff-port-319221 kubelet[780]: E1014 21:18:01.805257     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fzndb_kubernetes-dashboard(4926e00b-dc09-4ea2-870b-1cea1a75ba3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fzndb" podUID="4926e00b-dc09-4ea2-870b-1cea1a75ba3a"
	Oct 14 21:18:02 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:02.825675     780 scope.go:117] "RemoveContainer" containerID="fd4d31ec998842bbcb345d3d43ed7d6eb7cffac54c531548ae712c7d39a4c093"
	Oct 14 21:18:02 default-k8s-diff-port-319221 kubelet[780]: E1014 21:18:02.829347     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fzndb_kubernetes-dashboard(4926e00b-dc09-4ea2-870b-1cea1a75ba3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fzndb" podUID="4926e00b-dc09-4ea2-870b-1cea1a75ba3a"
	Oct 14 21:18:05 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:05.474955     780 scope.go:117] "RemoveContainer" containerID="fd4d31ec998842bbcb345d3d43ed7d6eb7cffac54c531548ae712c7d39a4c093"
	Oct 14 21:18:05 default-k8s-diff-port-319221 kubelet[780]: E1014 21:18:05.475132     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fzndb_kubernetes-dashboard(4926e00b-dc09-4ea2-870b-1cea1a75ba3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fzndb" podUID="4926e00b-dc09-4ea2-870b-1cea1a75ba3a"
	Oct 14 21:18:20 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:20.689208     780 scope.go:117] "RemoveContainer" containerID="fd4d31ec998842bbcb345d3d43ed7d6eb7cffac54c531548ae712c7d39a4c093"
	Oct 14 21:18:20 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:20.871201     780 scope.go:117] "RemoveContainer" containerID="fd4d31ec998842bbcb345d3d43ed7d6eb7cffac54c531548ae712c7d39a4c093"
	Oct 14 21:18:21 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:21.875157     780 scope.go:117] "RemoveContainer" containerID="f0c2c88aa3e33c7438999ff7823b3abf4b950c125e431b49ba13b42b0e840572"
	Oct 14 21:18:21 default-k8s-diff-port-319221 kubelet[780]: E1014 21:18:21.875761     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fzndb_kubernetes-dashboard(4926e00b-dc09-4ea2-870b-1cea1a75ba3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fzndb" podUID="4926e00b-dc09-4ea2-870b-1cea1a75ba3a"
	Oct 14 21:18:21 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:21.896853     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kvztt" podStartSLOduration=15.031349991 podStartE2EDuration="26.895418972s" podCreationTimestamp="2025-10-14 21:17:55 +0000 UTC" firstStartedPulling="2025-10-14 21:17:55.517436255 +0000 UTC m=+11.165205214" lastFinishedPulling="2025-10-14 21:18:07.381505237 +0000 UTC m=+23.029274195" observedRunningTime="2025-10-14 21:18:07.862759818 +0000 UTC m=+23.510528785" watchObservedRunningTime="2025-10-14 21:18:21.895418972 +0000 UTC m=+37.543187930"
	Oct 14 21:18:22 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:22.882078     780 scope.go:117] "RemoveContainer" containerID="2401b34dca3a489e7d8635faece0f20bdff99d068044275db0dd3354167b954c"
	Oct 14 21:18:25 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:25.474545     780 scope.go:117] "RemoveContainer" containerID="f0c2c88aa3e33c7438999ff7823b3abf4b950c125e431b49ba13b42b0e840572"
	Oct 14 21:18:25 default-k8s-diff-port-319221 kubelet[780]: E1014 21:18:25.475202     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fzndb_kubernetes-dashboard(4926e00b-dc09-4ea2-870b-1cea1a75ba3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fzndb" podUID="4926e00b-dc09-4ea2-870b-1cea1a75ba3a"
	Oct 14 21:18:37 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:37.689954     780 scope.go:117] "RemoveContainer" containerID="f0c2c88aa3e33c7438999ff7823b3abf4b950c125e431b49ba13b42b0e840572"
	Oct 14 21:18:37 default-k8s-diff-port-319221 kubelet[780]: E1014 21:18:37.690150     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fzndb_kubernetes-dashboard(4926e00b-dc09-4ea2-870b-1cea1a75ba3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fzndb" podUID="4926e00b-dc09-4ea2-870b-1cea1a75ba3a"
	Oct 14 21:18:44 default-k8s-diff-port-319221 kubelet[780]: E1014 21:18:44.821532     780 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/crio-d6007dc5b48d505a48ea6d8ce98ae4cf547a248f9e8df04f475d8baa1875295b\": RecentStats: unable to find data in memory cache]"
	Oct 14 21:18:44 default-k8s-diff-port-319221 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 14 21:18:44 default-k8s-diff-port-319221 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 14 21:18:44 default-k8s-diff-port-319221 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b7fe5f8da58a66adccb01bb7fd806b32a9fe61f3e4fa8aa6b71649dc014f1c0a] <==
	2025/10/14 21:18:07 Using namespace: kubernetes-dashboard
	2025/10/14 21:18:07 Using in-cluster config to connect to apiserver
	2025/10/14 21:18:07 Using secret token for csrf signing
	2025/10/14 21:18:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/14 21:18:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/14 21:18:07 Successful initial request to the apiserver, version: v1.34.1
	2025/10/14 21:18:07 Generating JWE encryption key
	2025/10/14 21:18:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/14 21:18:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/14 21:18:07 Initializing JWE encryption key from synchronized object
	2025/10/14 21:18:07 Creating in-cluster Sidecar client
	2025/10/14 21:18:07 Serving insecurely on HTTP port: 9090
	2025/10/14 21:18:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/14 21:18:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/14 21:18:07 Starting overwatch
	
	
	==> storage-provisioner [2401b34dca3a489e7d8635faece0f20bdff99d068044275db0dd3354167b954c] <==
	I1014 21:17:52.085373       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1014 21:18:22.103938       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f361b540f6ffbd7d9ffc03c5553e783416a8e0b539626e67992aa338f683445d] <==
	I1014 21:18:22.988548       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 21:18:23.018664       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 21:18:23.019022       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1014 21:18:23.026131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:26.481822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:30.742852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:34.340761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:37.394843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:40.416489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:40.424484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 21:18:40.424635       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 21:18:40.424790       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-319221_1c336939-6046-4f5d-96e3-90a54aab55e7!
	I1014 21:18:40.425684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32eebcf8-6bdd-4075-bafa-c8aa6a4d9e02", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-319221_1c336939-6046-4f5d-96e3-90a54aab55e7 became leader
	W1014 21:18:40.434348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:40.438065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 21:18:40.525454       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-319221_1c336939-6046-4f5d-96e3-90a54aab55e7!
	W1014 21:18:42.441308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:42.446269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:44.449587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:44.456797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:46.460114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:46.464991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
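The storage-provisioner output above is dominated by "v1 Endpoints is deprecated in v1.33+" warnings because its leader election still takes the kube-system/k8s.io-minikube-hostpath lock through an Endpoints object. As a minimal illustrative sketch only (not the provisioner's actual code), the same election against a current cluster would use a coordination.k8s.io Lease via client-go; the lease name and namespace are copied from the log, everything else is assumed:

// Sketch: leader election on a Lease instead of the deprecated Endpoints lock.
// Assumes in-cluster credentials; object names are taken from the log above.
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath", // lease name from the log
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// start the provisioner controller once the lease is held
			},
			OnStoppedLeading: func() {
				// shut down cleanly when the lease is lost
			},
		},
	})
}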
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-319221 -n default-k8s-diff-port-319221
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-319221 -n default-k8s-diff-port-319221: exit status 2 (370.473805ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
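The status call above prints "Running" yet exits with status 2; the harness records this as "may be ok" because the exit code of `minikube status` reflects the overall component state, not just the single field selected by the Go template. A minimal sketch of that pattern, assuming the binary path and profile name shown in the log:

// Sketch: run `minikube status` with a Go-template field selector and record
// the exit code separately from the printed value, as the test helpers do.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64",
		"status", "--format={{.APIServer}}",
		"-p", "default-k8s-diff-port-319221",
		"-n", "default-k8s-diff-port-319221")

	out, err := cmd.Output() // stdout is still populated on a non-zero exit
	state := strings.TrimSpace(string(out))

	exitCode := 0
	if ee, ok := err.(*exec.ExitError); ok {
		exitCode = ee.ExitCode() // non-zero means some component is not healthy
	}
	fmt.Printf("apiserver=%q exit=%d (non-zero may still be ok)\n", state, exitCode)
}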
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-319221 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
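The "<empty>" values in the proxy snapshot above mean the variables are unset (or empty) on the host. A trivial sketch of the same check, purely for illustration:

// Sketch: print proxy-related environment variables the way the post-mortem
// snapshot does, substituting "<empty>" when a variable is unset or blank.
package main

import (
	"fmt"
	"os"
)

func main() {
	for _, name := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
		val := os.Getenv(name)
		if val == "" {
			val = "<empty>"
		}
		fmt.Printf("%s=%q ", name, val)
	}
	fmt.Println()
}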
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-319221
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-319221:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3",
	        "Created": "2025-10-14T21:15:54.491462539Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 739979,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:17:36.706711456Z",
	            "FinishedAt": "2025-10-14T21:17:35.805219634Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/hosts",
	        "LogPath": "/var/lib/docker/containers/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3-json.log",
	        "Name": "/default-k8s-diff-port-319221",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-319221:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-319221",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3",
	                "LowerDir": "/var/lib/docker/overlay2/882da2e1fffe67015b2504b67fde7e27d0bff25149ca31ed071d01511fda8d09-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/882da2e1fffe67015b2504b67fde7e27d0bff25149ca31ed071d01511fda8d09/merged",
	                "UpperDir": "/var/lib/docker/overlay2/882da2e1fffe67015b2504b67fde7e27d0bff25149ca31ed071d01511fda8d09/diff",
	                "WorkDir": "/var/lib/docker/overlay2/882da2e1fffe67015b2504b67fde7e27d0bff25149ca31ed071d01511fda8d09/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-319221",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-319221/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-319221",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-319221",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-319221",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84bb7ae81f55037a5e990b49d2dbd3e5f620df9b07d39bc57fd9e6d4fa9fe977",
	            "SandboxKey": "/var/run/docker/netns/84bb7ae81f55",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33802"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33803"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33806"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33804"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33805"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-319221": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:3a:70:9a:c2:e1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a784e56ec6e33ab3395b1054e1c771fec07076ba24af08b06eab32f8be7d3fc1",
	                    "EndpointID": "4143cf89c103cadea106d853ceaa23282d22d382f9847ba01708cb49a8dfeed3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-319221",
	                        "09b53eb1fb5a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
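Rather than parsing the full JSON dump above, individual fields such as the published SSH port can be read with `docker inspect -f` and a Go template; the same template appears later in this log when the machine driver resolves the 22/tcp host port. A sketch, assuming the container name from this report:

// Sketch: pull the host port mapped to 22/tcp out of `docker inspect` with a
// Go template instead of parsing the whole JSON document.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"default-k8s-diff-port-319221").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 33802 in this run
}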
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-319221 -n default-k8s-diff-port-319221
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-319221 -n default-k8s-diff-port-319221: exit status 2 (358.094133ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-319221 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-319221 logs -n 25: (1.347867843s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-967887 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ delete  │ -p cert-options-967887                                                                                                                                                                                                                        │ cert-options-967887          │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:13 UTC │
	│ start   │ -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:13 UTC │ 14 Oct 25 21:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-624341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │                     │
	│ stop    │ -p old-k8s-version-624341 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-624341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:14 UTC │
	│ start   │ -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:15 UTC │
	│ start   │ -p cert-expiration-341199 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ delete  │ -p cert-expiration-341199                                                                                                                                                                                                                     │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ start   │ -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:17 UTC │
	│ image   │ old-k8s-version-624341 image list --format=json                                                                                                                                                                                               │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ pause   │ -p old-k8s-version-624341 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │                     │
	│ delete  │ -p old-k8s-version-624341                                                                                                                                                                                                                     │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:16 UTC │
	│ delete  │ -p old-k8s-version-624341                                                                                                                                                                                                                     │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:16 UTC │
	│ start   │ -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-319221 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-319221 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-319221 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ start   │ -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-650951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	│ stop    │ -p embed-certs-650951 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-650951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ start   │ -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	│ image   │ default-k8s-diff-port-319221 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ pause   │ -p default-k8s-diff-port-319221 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:17:57
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:17:57.476873  742505 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:17:57.477089  742505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:17:57.477117  742505 out.go:374] Setting ErrFile to fd 2...
	I1014 21:17:57.477136  742505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:17:57.477467  742505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:17:57.477865  742505 out.go:368] Setting JSON to false
	I1014 21:17:57.478895  742505 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14428,"bootTime":1760462250,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:17:57.478988  742505 start.go:141] virtualization:  
	I1014 21:17:57.484559  742505 out.go:179] * [embed-certs-650951] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:17:57.489152  742505 notify.go:220] Checking for updates...
	I1014 21:17:57.492470  742505 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:17:57.495732  742505 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:17:57.498702  742505 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:17:57.501672  742505 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:17:57.505508  742505 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:17:57.508821  742505 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:17:57.512422  742505 config.go:182] Loaded profile config "embed-certs-650951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:17:57.512992  742505 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:17:57.559230  742505 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:17:57.559338  742505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:17:57.663438  742505 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-14 21:17:57.650701596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:17:57.663547  742505 docker.go:318] overlay module found
	I1014 21:17:57.668185  742505 out.go:179] * Using the docker driver based on existing profile
	I1014 21:17:57.671479  742505 start.go:305] selected driver: docker
	I1014 21:17:57.671500  742505 start.go:925] validating driver "docker" against &{Name:embed-certs-650951 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-650951 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:17:57.671616  742505 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:17:57.672290  742505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:17:57.768126  742505 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-14 21:17:57.756227697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:17:57.768489  742505 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:17:57.768526  742505 cni.go:84] Creating CNI manager for ""
	I1014 21:17:57.768586  742505 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:17:57.768627  742505 start.go:349] cluster config:
	{Name:embed-certs-650951 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-650951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:17:57.773324  742505 out.go:179] * Starting "embed-certs-650951" primary control-plane node in "embed-certs-650951" cluster
	I1014 21:17:57.776577  742505 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:17:57.779800  742505 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:17:57.783015  742505 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:17:57.783080  742505 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 21:17:57.783096  742505 cache.go:58] Caching tarball of preloaded images
	I1014 21:17:57.783181  742505 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:17:57.783200  742505 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 21:17:57.783321  742505 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/config.json ...
	I1014 21:17:57.783539  742505 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:17:57.812541  742505 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:17:57.812561  742505 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:17:57.812581  742505 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:17:57.812603  742505 start.go:360] acquireMachinesLock for embed-certs-650951: {Name:mkc5e4e33d326b637da386142871e636ddbb8a63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:17:57.812652  742505 start.go:364] duration metric: took 33.108µs to acquireMachinesLock for "embed-certs-650951"
	I1014 21:17:57.812671  742505 start.go:96] Skipping create...Using existing machine configuration
	I1014 21:17:57.812682  742505 fix.go:54] fixHost starting: 
	I1014 21:17:57.812935  742505 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:17:57.831951  742505 fix.go:112] recreateIfNeeded on embed-certs-650951: state=Stopped err=<nil>
	W1014 21:17:57.831978  742505 fix.go:138] unexpected machine state, will restart: <nil>
	W1014 21:17:57.573088  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:00.127013  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	I1014 21:17:57.839578  742505 out.go:252] * Restarting existing docker container for "embed-certs-650951" ...
	I1014 21:17:57.839681  742505 cli_runner.go:164] Run: docker start embed-certs-650951
	I1014 21:17:58.181650  742505 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:17:58.213286  742505 kic.go:430] container "embed-certs-650951" state is running.
	I1014 21:17:58.213705  742505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-650951
	I1014 21:17:58.235696  742505 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/config.json ...
	I1014 21:17:58.235926  742505 machine.go:93] provisionDockerMachine start ...
	I1014 21:17:58.236010  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:17:58.261966  742505 main.go:141] libmachine: Using SSH client type: native
	I1014 21:17:58.262510  742505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1014 21:17:58.262536  742505 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:17:58.263199  742505 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50052->127.0.0.1:33807: read: connection reset by peer
	I1014 21:18:01.421111  742505 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-650951
	
	I1014 21:18:01.421207  742505 ubuntu.go:182] provisioning hostname "embed-certs-650951"
	I1014 21:18:01.421294  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:01.441647  742505 main.go:141] libmachine: Using SSH client type: native
	I1014 21:18:01.441958  742505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1014 21:18:01.441970  742505 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-650951 && echo "embed-certs-650951" | sudo tee /etc/hostname
	I1014 21:18:01.617937  742505 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-650951
	
	I1014 21:18:01.618032  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:01.638882  742505 main.go:141] libmachine: Using SSH client type: native
	I1014 21:18:01.639277  742505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1014 21:18:01.639302  742505 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-650951' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-650951/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-650951' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:18:01.806828  742505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 21:18:01.806854  742505 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:18:01.806882  742505 ubuntu.go:190] setting up certificates
	I1014 21:18:01.806892  742505 provision.go:84] configureAuth start
	I1014 21:18:01.806967  742505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-650951
	I1014 21:18:01.832517  742505 provision.go:143] copyHostCerts
	I1014 21:18:01.832588  742505 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:18:01.832608  742505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:18:01.832683  742505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:18:01.832790  742505 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:18:01.832800  742505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:18:01.832826  742505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:18:01.832888  742505 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:18:01.832898  742505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:18:01.832921  742505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:18:01.832977  742505 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.embed-certs-650951 san=[127.0.0.1 192.168.85.2 embed-certs-650951 localhost minikube]
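	(Aside: the san=[...] list in the line above becomes the server certificate's subject alternative names. Purely as an illustration — not minikube's actual signing code, which signs with the ca.pem shown above rather than self-signing — that SAN list maps onto a Go x509 template roughly like this:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key and template only; in the real flow this request is signed by minikube's CA.
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-650951"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration:26280h0m0s in the config dump
			// san=[127.0.0.1 192.168.85.2 embed-certs-650951 localhost minikube] from the log line above:
			DNSNames:    []string{"embed-certs-650951", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	End of aside.)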
	I1014 21:18:02.127084  742505 provision.go:177] copyRemoteCerts
	I1014 21:18:02.127159  742505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:18:02.127213  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:02.144508  742505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:18:02.253272  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:18:02.288807  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 21:18:02.329052  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 21:18:02.364916  742505 provision.go:87] duration metric: took 557.991625ms to configureAuth
	I1014 21:18:02.364951  742505 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:18:02.365182  742505 config.go:182] Loaded profile config "embed-certs-650951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:18:02.365324  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:02.389317  742505 main.go:141] libmachine: Using SSH client type: native
	I1014 21:18:02.389650  742505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1014 21:18:02.389673  742505 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:18:02.792609  742505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:18:02.792637  742505 machine.go:96] duration metric: took 4.556694925s to provisionDockerMachine
	I1014 21:18:02.792649  742505 start.go:293] postStartSetup for "embed-certs-650951" (driver="docker")
	I1014 21:18:02.792661  742505 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:18:02.792741  742505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:18:02.792792  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:02.824153  742505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:18:02.948817  742505 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:18:02.952603  742505 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:18:02.952634  742505 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:18:02.952646  742505 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:18:02.952702  742505 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:18:02.952785  742505 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:18:02.952916  742505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:18:02.961221  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:18:02.981598  742505 start.go:296] duration metric: took 188.932241ms for postStartSetup
	I1014 21:18:02.981681  742505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:18:02.981725  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:03.004900  742505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:18:03.115468  742505 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:18:03.121623  742505 fix.go:56] duration metric: took 5.308939791s for fixHost
	I1014 21:18:03.121693  742505 start.go:83] releasing machines lock for "embed-certs-650951", held for 5.309031337s
	I1014 21:18:03.121805  742505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-650951
	I1014 21:18:03.163181  742505 ssh_runner.go:195] Run: cat /version.json
	I1014 21:18:03.163249  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:03.167258  742505 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:18:03.167329  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:03.191129  742505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:18:03.191965  742505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:18:03.413838  742505 ssh_runner.go:195] Run: systemctl --version
	I1014 21:18:03.421464  742505 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:18:03.466124  742505 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:18:03.473317  742505 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:18:03.473383  742505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:18:03.481931  742505 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 21:18:03.481958  742505 start.go:495] detecting cgroup driver to use...
	I1014 21:18:03.481989  742505 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:18:03.482039  742505 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:18:03.499434  742505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:18:03.514846  742505 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:18:03.514912  742505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:18:03.531867  742505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:18:03.548444  742505 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:18:03.700199  742505 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:18:03.861053  742505 docker.go:234] disabling docker service ...
	I1014 21:18:03.861131  742505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:18:03.878392  742505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:18:03.892476  742505 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:18:04.050884  742505 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:18:04.224394  742505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:18:04.241818  742505 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:18:04.262961  742505 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:18:04.263025  742505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:18:04.272760  742505 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:18:04.272833  742505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:18:04.282223  742505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:18:04.304442  742505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:18:04.317107  742505 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:18:04.325809  742505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:18:04.337763  742505 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:18:04.347332  742505 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:18:04.358441  742505 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:18:04.368037  742505 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:18:04.377582  742505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:18:04.547782  742505 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 21:18:05.027711  742505 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:18:05.027801  742505 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:18:05.032783  742505 start.go:563] Will wait 60s for crictl version
	I1014 21:18:05.032853  742505 ssh_runner.go:195] Run: which crictl
	I1014 21:18:05.036943  742505 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:18:05.073130  742505 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:18:05.073216  742505 ssh_runner.go:195] Run: crio --version
	I1014 21:18:05.110296  742505 ssh_runner.go:195] Run: crio --version
	I1014 21:18:05.150170  742505 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1014 21:18:02.574676  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:04.580716  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	I1014 21:18:05.153171  742505 cli_runner.go:164] Run: docker network inspect embed-certs-650951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:18:05.174634  742505 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1014 21:18:05.178662  742505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
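	(Aside: the bash one-liner above rewrites /etc/hosts so that exactly one host.minikube.internal line points at the gateway IP. A minimal Go sketch of the same idempotent update — an illustrative helper, not minikube code; it needs root to write /etc/hosts:

	package main

	import (
		"log"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any existing line ending in "\t<name>" from the hosts
	// file and appends "<ip>\t<name>", mirroring the grep/echo pipeline in the log.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // remove the stale mapping
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
			log.Fatal(err)
		}
	}
	End of aside.)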
	I1014 21:18:05.192468  742505 kubeadm.go:883] updating cluster {Name:embed-certs-650951 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-650951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:18:05.192594  742505 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:18:05.192661  742505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:18:05.252551  742505 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:18:05.252578  742505 crio.go:433] Images already preloaded, skipping extraction
	I1014 21:18:05.252634  742505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:18:05.290130  742505 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:18:05.290151  742505 cache_images.go:85] Images are preloaded, skipping loading
	I1014 21:18:05.290158  742505 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1014 21:18:05.290254  742505 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-650951 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-650951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 21:18:05.290329  742505 ssh_runner.go:195] Run: crio config
	I1014 21:18:05.389083  742505 cni.go:84] Creating CNI manager for ""
	I1014 21:18:05.389151  742505 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:18:05.389186  742505 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 21:18:05.389240  742505 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-650951 NodeName:embed-certs-650951 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:18:05.389402  742505 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-650951"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 21:18:05.389487  742505 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:18:05.397560  742505 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:18:05.397712  742505 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:18:05.405445  742505 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1014 21:18:05.419121  742505 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:18:05.433135  742505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
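	(Aside: the kubeadm config dumped above is a four-document YAML stream — InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration — written to /var/tmp/minikube/kubeadm.yaml.new. A small Go sketch that splits the file and prints each document's apiVersion and kind, assuming gopkg.in/yaml.v3 is available; this is a hypothetical checker, not part of the test suite:

	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	// Every document in the generated file carries its own apiVersion and kind.
	type docHeader struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var h docHeader
			if err := yaml.Unmarshal([]byte(doc), &h); err != nil {
				log.Printf("unparsable document: %v", err)
				continue
			}
			fmt.Printf("%-40s %s\n", h.APIVersion, h.Kind)
		}
	}
	End of aside.)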
	I1014 21:18:05.449469  742505 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:18:05.453723  742505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:18:05.463833  742505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:18:05.611520  742505 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:18:05.628815  742505 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951 for IP: 192.168.85.2
	I1014 21:18:05.628885  742505 certs.go:195] generating shared ca certs ...
	I1014 21:18:05.628916  742505 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:18:05.629086  742505 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:18:05.629165  742505 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:18:05.629206  742505 certs.go:257] generating profile certs ...
	I1014 21:18:05.629332  742505 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/client.key
	I1014 21:18:05.629450  742505 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/apiserver.key.613231d7
	I1014 21:18:05.629533  742505 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/proxy-client.key
	I1014 21:18:05.629704  742505 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:18:05.629777  742505 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:18:05.629807  742505 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:18:05.629870  742505 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:18:05.629946  742505 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:18:05.630010  742505 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:18:05.630098  742505 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:18:05.630942  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:18:05.652032  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:18:05.670884  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:18:05.697811  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:18:05.741151  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1014 21:18:05.778386  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 21:18:05.841516  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:18:05.909569  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/embed-certs-650951/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 21:18:05.939271  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:18:05.973672  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:18:05.995693  742505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:18:06.019077  742505 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:18:06.037468  742505 ssh_runner.go:195] Run: openssl version
	I1014 21:18:06.044821  742505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:18:06.054842  742505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:18:06.060603  742505 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:18:06.060731  742505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:18:06.111157  742505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:18:06.120231  742505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:18:06.129776  742505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:18:06.134771  742505 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:18:06.134889  742505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:18:06.181676  742505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 21:18:06.191067  742505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:18:06.200838  742505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:18:06.205566  742505 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:18:06.205718  742505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:18:06.266415  742505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
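	(Aside: each of the test -L / ln -fs steps above installs a CA under OpenSSL's hashed-name convention: the link name is the certificate's subject hash plus a .0 suffix (51391683.0, 3ec20f2e.0, b5213941.0 in this run). A rough Go equivalent that shells out to openssl for the hash — illustrative only; it assumes openssl is on PATH and that the process can write to /etc/ssl/certs:

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCAByHash asks openssl for the certificate's subject hash and symlinks the
	// cert as <hash>.0 in certsDir, the same effect as the "ln -fs" commands above.
	func linkCAByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // replace any stale link, like "ln -fs"
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			log.Fatal(err)
		}
	}
	End of aside.)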
	I1014 21:18:06.275471  742505 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:18:06.279802  742505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 21:18:06.321931  742505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 21:18:06.365588  742505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 21:18:06.448454  742505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 21:18:06.552603  742505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 21:18:06.640427  742505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
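	(Aside: each "openssl x509 -noout -checkend 86400" call above exits non-zero if the certificate expires within the next 24 hours, which is how the restart path decides whether the existing control-plane certs can be reused. A hedged Go equivalent using crypto/x509 — a sketch, not the command minikube actually runs:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the same question "openssl x509 -noout -checkend 86400" answers for 24 hours.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return cert.NotAfter.Before(time.Now().Add(d)), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
	End of aside.)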
	I1014 21:18:06.742487  742505 kubeadm.go:400] StartCluster: {Name:embed-certs-650951 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-650951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:18:06.742656  742505 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:18:06.742763  742505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:18:06.941950  742505 cri.go:89] found id: "75ec260b360cee33d35761419902c0686412140608fc505b0a7fd6b06809b9be"
	I1014 21:18:06.942032  742505 cri.go:89] found id: "26600dc7ca36cf22f70e429a378f851aa6e2859e07559b7093ae575df3166637"
	I1014 21:18:06.942051  742505 cri.go:89] found id: "9a14bf7659c1b6c8efd0475e0eaeb245bdc49f5e97446a64f00af1e088d658af"
	I1014 21:18:06.942075  742505 cri.go:89] found id: "af87a8da5c17113367f1c826c6f15aabf7ed94ed0d2a08fe11374ca367dd615c"
	I1014 21:18:06.942107  742505 cri.go:89] found id: ""
	I1014 21:18:06.942193  742505 ssh_runner.go:195] Run: sudo runc list -f json
	W1014 21:18:06.977621  742505 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:18:06Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:18:06.977759  742505 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:18:07.037201  742505 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 21:18:07.037271  742505 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 21:18:07.037361  742505 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 21:18:07.055005  742505 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 21:18:07.055726  742505 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-650951" does not appear in /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:18:07.056057  742505 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-545047/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-650951" cluster setting kubeconfig missing "embed-certs-650951" context setting]
	I1014 21:18:07.056659  742505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:18:07.058565  742505 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 21:18:07.072985  742505 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1014 21:18:07.073069  742505 kubeadm.go:601] duration metric: took 35.777189ms to restartPrimaryControlPlane
	I1014 21:18:07.073112  742505 kubeadm.go:402] duration metric: took 330.640252ms to StartCluster
	I1014 21:18:07.073143  742505 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:18:07.073254  742505 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:18:07.074681  742505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:18:07.075102  742505 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:18:07.075477  742505 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 21:18:07.075554  742505 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-650951"
	I1014 21:18:07.075568  742505 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-650951"
	W1014 21:18:07.075574  742505 addons.go:247] addon storage-provisioner should already be in state true
	I1014 21:18:07.075596  742505 host.go:66] Checking if "embed-certs-650951" exists ...
	I1014 21:18:07.076060  742505 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:18:07.076495  742505 config.go:182] Loaded profile config "embed-certs-650951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:18:07.076579  742505 addons.go:69] Setting dashboard=true in profile "embed-certs-650951"
	I1014 21:18:07.076628  742505 addons.go:238] Setting addon dashboard=true in "embed-certs-650951"
	W1014 21:18:07.076654  742505 addons.go:247] addon dashboard should already be in state true
	I1014 21:18:07.076706  742505 host.go:66] Checking if "embed-certs-650951" exists ...
	I1014 21:18:07.077189  742505 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:18:07.084225  742505 addons.go:69] Setting default-storageclass=true in profile "embed-certs-650951"
	I1014 21:18:07.084270  742505 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-650951"
	I1014 21:18:07.085110  742505 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:18:07.092518  742505 out.go:179] * Verifying Kubernetes components...
	I1014 21:18:07.096504  742505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:18:07.116938  742505 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1014 21:18:07.124546  742505 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1014 21:18:07.128559  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1014 21:18:07.128586  742505 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1014 21:18:07.128669  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:07.134416  742505 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:18:07.140422  742505 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:18:07.140445  742505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 21:18:07.140512  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:07.154254  742505 addons.go:238] Setting addon default-storageclass=true in "embed-certs-650951"
	W1014 21:18:07.154281  742505 addons.go:247] addon default-storageclass should already be in state true
	I1014 21:18:07.154305  742505 host.go:66] Checking if "embed-certs-650951" exists ...
	I1014 21:18:07.154728  742505 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:18:07.197163  742505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:18:07.198078  742505 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 21:18:07.198092  742505 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 21:18:07.198148  742505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:18:07.210010  742505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:18:07.236532  742505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:18:07.437370  742505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 21:18:07.084145  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:09.570959  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	I1014 21:18:07.530450  742505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 21:18:07.554601  742505 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:18:07.717973  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1014 21:18:07.718002  742505 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1014 21:18:07.857458  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1014 21:18:07.857484  742505 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1014 21:18:07.941400  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1014 21:18:07.941429  742505 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1014 21:18:07.961266  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1014 21:18:07.961292  742505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1014 21:18:07.994629  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1014 21:18:07.994663  742505 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1014 21:18:08.023726  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1014 21:18:08.023754  742505 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1014 21:18:08.046577  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1014 21:18:08.046604  742505 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1014 21:18:08.078923  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1014 21:18:08.078963  742505 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1014 21:18:08.100349  742505 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 21:18:08.100372  742505 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1014 21:18:08.127107  742505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 21:18:13.263035  742505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.825553079s)
	I1014 21:18:13.263138  742505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.732656299s)
	I1014 21:18:13.263174  742505 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.70853855s)
	I1014 21:18:13.263727  742505 node_ready.go:35] waiting up to 6m0s for node "embed-certs-650951" to be "Ready" ...
	I1014 21:18:13.263252  742505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.136115007s)
	I1014 21:18:13.268699  742505 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-650951 addons enable metrics-server
	
	I1014 21:18:13.305629  742505 node_ready.go:49] node "embed-certs-650951" is "Ready"
	I1014 21:18:13.305709  742505 node_ready.go:38] duration metric: took 41.946602ms for node "embed-certs-650951" to be "Ready" ...
	I1014 21:18:13.305737  742505 api_server.go:52] waiting for apiserver process to appear ...
	I1014 21:18:13.305821  742505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 21:18:13.319861  742505 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1014 21:18:11.573515  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:14.072008  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:16.072406  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	I1014 21:18:13.323045  742505 addons.go:514] duration metric: took 6.247551025s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1014 21:18:13.354441  742505 api_server.go:72] duration metric: took 6.27927771s to wait for apiserver process to appear ...
	I1014 21:18:13.354466  742505 api_server.go:88] waiting for apiserver healthz status ...
	I1014 21:18:13.354489  742505 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:18:13.362861  742505 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 21:18:13.362893  742505 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 21:18:13.855131  742505 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:18:13.865892  742505 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1014 21:18:13.866950  742505 api_server.go:141] control plane version: v1.34.1
	I1014 21:18:13.866972  742505 api_server.go:131] duration metric: took 512.498849ms to wait for apiserver health ...
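	(Aside: the 500 responses above, with only the rbac/bootstrap-roles hook failing, are the usual transient state while the apiserver is still creating its bootstrap RBAC objects; the wait loop simply retries until /healthz returns a plain 200/ok, as it does here half a second later. A minimal Go sketch of such a poll — assuming anonymous access to /healthz, which default RBAC grants, and skipping TLS verification purely to keep the example short; a real client would trust the cluster CA instead:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver /healthz endpoint until it returns 200 or
	// the deadline passes, printing the body of any non-200 response.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s did not become healthy within %s", url, timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	End of aside.)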
	I1014 21:18:13.866981  742505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 21:18:13.874405  742505 system_pods.go:59] 8 kube-system pods found
	I1014 21:18:13.874447  742505 system_pods.go:61] "coredns-66bc5c9577-sllzf" [29df3259-1ac0-4fa0-9cd6-67dd67036cd1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 21:18:13.874457  742505 system_pods.go:61] "etcd-embed-certs-650951" [bf83dcd6-0b82-4546-b26c-1a294852d7a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 21:18:13.874466  742505 system_pods.go:61] "kindnet-x2jcr" [a39835a1-f0d7-4387-9c8d-b91532773cfb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1014 21:18:13.874479  742505 system_pods.go:61] "kube-apiserver-embed-certs-650951" [690a7550-709a-4373-b54e-62f54bfcf4f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 21:18:13.874487  742505 system_pods.go:61] "kube-controller-manager-embed-certs-650951" [7b3614b2-c0b2-4898-8bed-b927018b1f52] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 21:18:13.874498  742505 system_pods.go:61] "kube-proxy-2xsjs" [0bbf4863-3dcb-41d8-9cb8-ab35659fc408] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 21:18:13.874505  742505 system_pods.go:61] "kube-scheduler-embed-certs-650951" [76abd18d-656f-4197-b432-65dcd7558c1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 21:18:13.874511  742505 system_pods.go:61] "storage-provisioner" [da644f72-95d9-4486-ae40-223ff853f318] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 21:18:13.874519  742505 system_pods.go:74] duration metric: took 7.53322ms to wait for pod list to return data ...
	I1014 21:18:13.874531  742505 default_sa.go:34] waiting for default service account to be created ...
	I1014 21:18:13.878570  742505 default_sa.go:45] found service account: "default"
	I1014 21:18:13.878596  742505 default_sa.go:55] duration metric: took 4.057672ms for default service account to be created ...
	I1014 21:18:13.878606  742505 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 21:18:13.882588  742505 system_pods.go:86] 8 kube-system pods found
	I1014 21:18:13.882617  742505 system_pods.go:89] "coredns-66bc5c9577-sllzf" [29df3259-1ac0-4fa0-9cd6-67dd67036cd1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 21:18:13.882627  742505 system_pods.go:89] "etcd-embed-certs-650951" [bf83dcd6-0b82-4546-b26c-1a294852d7a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 21:18:13.882635  742505 system_pods.go:89] "kindnet-x2jcr" [a39835a1-f0d7-4387-9c8d-b91532773cfb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1014 21:18:13.882642  742505 system_pods.go:89] "kube-apiserver-embed-certs-650951" [690a7550-709a-4373-b54e-62f54bfcf4f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 21:18:13.882653  742505 system_pods.go:89] "kube-controller-manager-embed-certs-650951" [7b3614b2-c0b2-4898-8bed-b927018b1f52] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 21:18:13.882670  742505 system_pods.go:89] "kube-proxy-2xsjs" [0bbf4863-3dcb-41d8-9cb8-ab35659fc408] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 21:18:13.882680  742505 system_pods.go:89] "kube-scheduler-embed-certs-650951" [76abd18d-656f-4197-b432-65dcd7558c1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 21:18:13.882686  742505 system_pods.go:89] "storage-provisioner" [da644f72-95d9-4486-ae40-223ff853f318] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 21:18:13.882696  742505 system_pods.go:126] duration metric: took 4.084832ms to wait for k8s-apps to be running ...
	I1014 21:18:13.882707  742505 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 21:18:13.882764  742505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:18:13.901083  742505 system_svc.go:56] duration metric: took 18.366032ms WaitForService to wait for kubelet
	I1014 21:18:13.901112  742505 kubeadm.go:586] duration metric: took 6.825954309s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:18:13.901134  742505 node_conditions.go:102] verifying NodePressure condition ...
	I1014 21:18:13.910641  742505 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 21:18:13.910669  742505 node_conditions.go:123] node cpu capacity is 2
	I1014 21:18:13.910681  742505 node_conditions.go:105] duration metric: took 9.542084ms to run NodePressure ...
	I1014 21:18:13.910692  742505 start.go:241] waiting for startup goroutines ...
	I1014 21:18:13.910699  742505 start.go:246] waiting for cluster config update ...
	I1014 21:18:13.910710  742505 start.go:255] writing updated cluster config ...
	I1014 21:18:13.910978  742505 ssh_runner.go:195] Run: rm -f paused
	I1014 21:18:13.916666  742505 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 21:18:13.920813  742505 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sllzf" in "kube-system" namespace to be "Ready" or be gone ...
	W1014 21:18:15.926460  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:18.076345  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:20.571636  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:18.427740  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:20.926674  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:23.078460  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:25.571467  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:22.939518  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:25.426639  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:27.426807  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:27.571621  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	W1014 21:18:30.076358  739852 pod_ready.go:104] pod "coredns-66bc5c9577-lm24g" is not "Ready", error: <nil>
	I1014 21:18:31.071756  739852 pod_ready.go:94] pod "coredns-66bc5c9577-lm24g" is "Ready"
	I1014 21:18:31.071785  739852 pod_ready.go:86] duration metric: took 38.006260006s for pod "coredns-66bc5c9577-lm24g" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:31.074667  739852 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:31.079996  739852 pod_ready.go:94] pod "etcd-default-k8s-diff-port-319221" is "Ready"
	I1014 21:18:31.080025  739852 pod_ready.go:86] duration metric: took 5.327599ms for pod "etcd-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:31.082573  739852 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:31.088369  739852 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-319221" is "Ready"
	I1014 21:18:31.088399  739852 pod_ready.go:86] duration metric: took 5.794407ms for pod "kube-apiserver-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:31.091160  739852 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:31.269083  739852 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-319221" is "Ready"
	I1014 21:18:31.269113  739852 pod_ready.go:86] duration metric: took 177.924502ms for pod "kube-controller-manager-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	W1014 21:18:29.926892  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:32.427246  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	I1014 21:18:31.469358  739852 pod_ready.go:83] waiting for pod "kube-proxy-gc4zh" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:31.869597  739852 pod_ready.go:94] pod "kube-proxy-gc4zh" is "Ready"
	I1014 21:18:31.869622  739852 pod_ready.go:86] duration metric: took 400.237084ms for pod "kube-proxy-gc4zh" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:32.070068  739852 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:32.469305  739852 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-319221" is "Ready"
	I1014 21:18:32.469337  739852 pod_ready.go:86] duration metric: took 399.194637ms for pod "kube-scheduler-default-k8s-diff-port-319221" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:18:32.469351  739852 pod_ready.go:40] duration metric: took 39.407616446s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 21:18:32.534174  739852 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1014 21:18:32.537301  739852 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-319221" cluster and "default" namespace by default
	W1014 21:18:34.431334  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:36.926725  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:38.927558  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:41.427239  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:43.925639  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	W1014 21:18:45.926493  742505 pod_ready.go:104] pod "coredns-66bc5c9577-sllzf" is not "Ready", error: <nil>
	
	
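	The api_server.go lines above show minikube polling the apiserver /healthz endpoint until it returns 200; while poststarthook/rbac/bootstrap-roles is still failing, the endpoint answers 500 with the per-check breakdown reproduced in the log. A minimal Go sketch of that style of poll, using only the standard library and skipping TLS verification for brevity (the real client authenticates with the cluster's certificates; the endpoint is simply the address from this log):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// HTTP 200, or the timeout expires. InsecureSkipVerify is only for this
	// sketch; a real client would present the cluster CA and client certs.
	func waitForHealthz(endpoint string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(endpoint + "/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // body is "ok"
				}
				// A 500 lists each [+]/[-] check, as in the log above.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.85.2:8443", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
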
	==> CRI-O <==
	Oct 14 21:18:20 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:20.88263494Z" level=info msg="Removed container fd4d31ec998842bbcb345d3d43ed7d6eb7cffac54c531548ae712c7d39a4c093: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fzndb/dashboard-metrics-scraper" id=c3a7813a-2a25-424e-8a42-4a661503bf90 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 21:18:22 default-k8s-diff-port-319221 conmon[1138]: conmon 2401b34dca3a489e7d86 <ninfo>: container 1142 exited with status 1
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.882958051Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=73a6100e-3dc3-4008-839e-5436e20e2024 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.887808798Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=59807aa1-d655-4796-af61-ee390318bda6 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.891760392Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=0d0cd9d7-4418-4c55-b03d-de922330604f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.892193034Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.9050908Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.907296403Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1615956c6c79e200cf12bbd96ece7276ad16f7de87aed8c35cf82bf3be4b90e5/merged/etc/passwd: no such file or directory"
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.907486323Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1615956c6c79e200cf12bbd96ece7276ad16f7de87aed8c35cf82bf3be4b90e5/merged/etc/group: no such file or directory"
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.907840374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.945164577Z" level=info msg="Created container f361b540f6ffbd7d9ffc03c5553e783416a8e0b539626e67992aa338f683445d: kube-system/storage-provisioner/storage-provisioner" id=0d0cd9d7-4418-4c55-b03d-de922330604f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.94632045Z" level=info msg="Starting container: f361b540f6ffbd7d9ffc03c5553e783416a8e0b539626e67992aa338f683445d" id=7e176023-c102-4d0c-95f2-cc975c781047 name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:18:22 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:22.948536326Z" level=info msg="Started container" PID=1652 containerID=f361b540f6ffbd7d9ffc03c5553e783416a8e0b539626e67992aa338f683445d description=kube-system/storage-provisioner/storage-provisioner id=7e176023-c102-4d0c-95f2-cc975c781047 name=/runtime.v1.RuntimeService/StartContainer sandboxID=82f43053fa4a284fc3c56e1617afa4afa71067c2ed37188bb0d272b520a1e7fc
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.313776318Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.317801481Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.317836739Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.317860354Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.321014162Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.321055336Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.321077827Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.324462844Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.324501967Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.324527288Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.328123567Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:18:32 default-k8s-diff-port-319221 crio[651]: time="2025-10-14T21:18:32.328158825Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	f361b540f6ffb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   82f43053fa4a2       storage-provisioner                                    kube-system
	f0c2c88aa3e33       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago       Exited              dashboard-metrics-scraper   2                   a87bcdfa16ba4       dashboard-metrics-scraper-6ffb444bf9-fzndb             kubernetes-dashboard
	b7fe5f8da58a6       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   455839f49512b       kubernetes-dashboard-855c9754f9-kvztt                  kubernetes-dashboard
	a5d1335313edf       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           57 seconds ago       Running             coredns                     1                   c1b6547ca8e2b       coredns-66bc5c9577-lm24g                               kube-system
	634729bf4e527       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   d6007dc5b48d5       busybox                                                default
	f928c3872da21       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   d7aae5f9c166b       kindnet-5hx2c                                          kube-system
	06faac1a4580d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   8e8e8da690fdd       kube-proxy-gc4zh                                       kube-system
	2401b34dca3a4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   82f43053fa4a2       storage-provisioner                                    kube-system
	c76dda2541826       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   d08b27be3c186       kube-scheduler-default-k8s-diff-port-319221            kube-system
	0b6a360756e08       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   3d6d77b0a4f6c       kube-apiserver-default-k8s-diff-port-319221            kube-system
	7b9318db856fe       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   a2eda7660c50a       etcd-default-k8s-diff-port-319221                      kube-system
	21c6e7b7c3b75       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   57de1d66f4f1d       kube-controller-manager-default-k8s-diff-port-319221   kube-system
	
	
	==> coredns [a5d1335313edfa9167baa641a267d1bf28579a1fd2d75e288836aca3279714f6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36107 - 34192 "HINFO IN 663654126897797571.5184740510088049192. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.023114193s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
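	The coredns log above shows the ready plugin holding readiness back while the kubernetes plugin cannot list Namespaces, Services, and EndpointSlices (the dials to 10.96.0.1:443 time out). With the ready plugin enabled, CoreDNS answers HTTP 200 on port 8181 at /ready only once every plugin has reported in, which is what the pod's readiness probe checks. A small sketch of that probe, with a placeholder pod IP:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// checkCoreDNSReady hits the CoreDNS "ready" plugin endpoint (:8181/ready).
	// It returns true only once every plugin, including "kubernetes", has
	// signalled readiness.
	func checkCoreDNSReady(podIP string) bool {
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get(fmt.Sprintf("http://%s:8181/ready", podIP))
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}

	func main() {
		if checkCoreDNSReady("10.244.0.5") { // placeholder pod IP
			fmt.Println("coredns is Ready")
		} else {
			fmt.Println("coredns is not Ready yet")
		}
	}
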
	==> describe nodes <==
	Name:               default-k8s-diff-port-319221
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-319221
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=default-k8s-diff-port-319221
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T21_16_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 21:16:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-319221
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 21:18:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 21:18:42 +0000   Tue, 14 Oct 2025 21:16:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 21:18:42 +0000   Tue, 14 Oct 2025 21:16:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 21:18:42 +0000   Tue, 14 Oct 2025 21:16:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 21:18:42 +0000   Tue, 14 Oct 2025 21:17:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-319221
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 f79511d9bd8b4db9ad629d2c2b7207bb
	  System UUID:                a87a8a0d-9a85-45a1-87ca-6b235dc2ea96
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-lm24g                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-default-k8s-diff-port-319221                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-5hx2c                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-319221             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-319221    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-gc4zh                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-default-k8s-diff-port-319221             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fzndb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-kvztt                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m19s                  kube-proxy       
	  Normal   Starting                 57s                    kube-proxy       
	  Normal   Starting                 2m35s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m35s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s (x8 over 2m35s)  kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m26s                  kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m26s                  kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m26s                  kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m26s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m23s                  node-controller  Node default-k8s-diff-port-319221 event: Registered Node default-k8s-diff-port-319221 in Controller
	  Normal   NodeReady                99s                    kubelet          Node default-k8s-diff-port-319221 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-319221 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                    node-controller  Node default-k8s-diff-port-319221 event: Registered Node default-k8s-diff-port-319221 in Controller
	
	
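	The describe nodes output above records the node's Ready condition turning True at 21:17:10 while the pressure conditions stay False. A sketch of reading those conditions with client-go, assuming a kubeconfig in the default location and omitting most error handling:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig minikube wrote (default ~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, node := range nodes.Items {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					// Mirrors the Ready row in the conditions table above.
					fmt.Printf("%s Ready=%s since %s (%s)\n",
						node.Name, cond.Status, cond.LastTransitionTime, cond.Reason)
				}
			}
		}
	}
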
	==> dmesg <==
	[Oct14 20:46] overlayfs: idmapped layers are currently not supported
	[Oct14 20:47] overlayfs: idmapped layers are currently not supported
	[Oct14 20:48] overlayfs: idmapped layers are currently not supported
	[Oct14 20:49] overlayfs: idmapped layers are currently not supported
	[ +15.861847] overlayfs: idmapped layers are currently not supported
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	[Oct14 21:03] overlayfs: idmapped layers are currently not supported
	[Oct14 21:04] overlayfs: idmapped layers are currently not supported
	[Oct14 21:12] overlayfs: idmapped layers are currently not supported
	[Oct14 21:13] overlayfs: idmapped layers are currently not supported
	[ +32.288963] overlayfs: idmapped layers are currently not supported
	[Oct14 21:15] overlayfs: idmapped layers are currently not supported
	[Oct14 21:16] overlayfs: idmapped layers are currently not supported
	[ +21.775876] overlayfs: idmapped layers are currently not supported
	[Oct14 21:17] overlayfs: idmapped layers are currently not supported
	[Oct14 21:18] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7b9318db856fe0d0f6a7117085a0adce0db784142ed7b06b332460753d08e8ae] <==
	{"level":"warn","ts":"2025-10-14T21:17:49.746236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.767612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.782326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.805742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.835494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.846650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.863856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.884781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.898057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.913302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.931797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.947761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.964833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.981176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:49.998023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.026504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.040566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.058708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.076098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.110715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.161489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.196884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.211452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.233581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:17:50.292897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50510","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:18:49 up  4:01,  0 user,  load average: 3.64, 2.52, 1.98
	Linux default-k8s-diff-port-319221 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f928c3872da21104d13ae2b485be1678e00a6f2bd1882dce4335b5c5bc58ef20] <==
	I1014 21:17:52.077803       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:17:52.078822       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1014 21:17:52.079001       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:17:52.079119       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:17:52.079175       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:17:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:17:52.315351       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:17:52.315458       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:17:52.315493       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:17:52.318531       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1014 21:18:22.315344       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1014 21:18:22.318518       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1014 21:18:22.318617       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1014 21:18:22.318709       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1014 21:18:23.917895       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1014 21:18:23.917986       1 metrics.go:72] Registering metrics
	I1014 21:18:23.918077       1 controller.go:711] "Syncing nftables rules"
	I1014 21:18:32.312957       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1014 21:18:32.313008       1 main.go:301] handling current node
	I1014 21:18:42.317923       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1014 21:18:42.317961       1 main.go:301] handling current node
	
	
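	The kindnet log above shows its client-go reflectors failing to list Pods, Nodes, Namespaces, and NetworkPolicies while the apiserver is unreachable, then reporting "Caches are synced" once the watches recover. A sketch of that wait-for-cache-sync pattern with a shared informer factory (kubeconfig path and timeout are placeholders):

	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// The factory's reflectors retry failed LIST/WATCH calls (the
		// "Failed to watch ... i/o timeout" lines above) until the apiserver
		// becomes reachable, after which the caches sync.
		factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
		podInformer := factory.Core().V1().Pods().Informer()

		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		factory.Start(ctx.Done())

		if !cache.WaitForCacheSync(ctx.Done(), podInformer.HasSynced) {
			fmt.Println("caches did not sync before the deadline")
			return
		}
		fmt.Println("Caches are synced") // mirrors the kindnet log line
	}
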
	==> kube-apiserver [0b6a360756e087c2443b2b0e2d881459ae06af4340a783adc594f403a6a0e4a7] <==
	I1014 21:17:51.192509       1 policy_source.go:240] refreshing policies
	I1014 21:17:51.192736       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 21:17:51.198899       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1014 21:17:51.219526       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1014 21:17:51.219594       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1014 21:17:51.264587       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1014 21:17:51.267582       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 21:17:51.279090       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 21:17:51.279399       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1014 21:17:51.279429       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 21:17:51.279437       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 21:17:51.288877       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 21:17:51.288975       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1014 21:17:51.328990       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:17:51.600715       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 21:17:51.988516       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 21:17:52.184686       1 controller.go:667] quota admission added evaluator for: namespaces
	I1014 21:17:52.336052       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1014 21:17:52.373023       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 21:17:52.385498       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 21:17:52.491377       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.85.52"}
	I1014 21:17:52.511617       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.125.45"}
	I1014 21:17:54.640687       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 21:17:55.041527       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1014 21:17:55.093729       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [21c6e7b7c3b753bafef79064f5b38d7a56991049b34f4cd0684b2a1c8ffdd1c6] <==
	I1014 21:17:54.543988       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1014 21:17:54.546482       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1014 21:17:54.546976       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1014 21:17:54.548215       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1014 21:17:54.551470       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1014 21:17:54.553655       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1014 21:17:54.553730       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:17:54.553737       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1014 21:17:54.553743       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 21:17:54.557460       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1014 21:17:54.557565       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1014 21:17:54.559319       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1014 21:17:54.561721       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1014 21:17:54.564198       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1014 21:17:54.567494       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1014 21:17:54.568742       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:17:54.569914       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1014 21:17:54.573190       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1014 21:17:54.583659       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1014 21:17:54.583726       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1014 21:17:54.583785       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1014 21:17:54.583852       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1014 21:17:54.584290       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1014 21:17:54.584884       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1014 21:17:54.602571       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [06faac1a4580df264944b69244b55b6cef6ece45b1cdd6bf0da2440fcb694194] <==
	I1014 21:17:52.522402       1 server_linux.go:53] "Using iptables proxy"
	I1014 21:17:52.622291       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 21:17:52.722380       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 21:17:52.722480       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1014 21:17:52.722577       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 21:17:52.740218       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:17:52.740357       1 server_linux.go:132] "Using iptables Proxier"
	I1014 21:17:52.743867       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 21:17:52.744171       1 server.go:527] "Version info" version="v1.34.1"
	I1014 21:17:52.744239       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:17:52.747732       1 config.go:106] "Starting endpoint slice config controller"
	I1014 21:17:52.747803       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 21:17:52.748124       1 config.go:200] "Starting service config controller"
	I1014 21:17:52.748192       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 21:17:52.748692       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 21:17:52.748740       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 21:17:52.749198       1 config.go:309] "Starting node config controller"
	I1014 21:17:52.749245       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 21:17:52.749274       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 21:17:52.848829       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1014 21:17:52.848812       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 21:17:52.848853       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c76dda25418267dfa4c7d4626489bfaeb9c07ffefc2274e1731308f9b5899ab2] <==
	I1014 21:17:48.354929       1 serving.go:386] Generated self-signed cert in-memory
	W1014 21:17:51.129644       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1014 21:17:51.129689       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1014 21:17:51.129704       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 21:17:51.129711       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 21:17:51.261950       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 21:17:51.261981       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:17:51.267320       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:17:51.272406       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:17:51.273376       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 21:17:51.273436       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 21:17:51.373226       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 14 21:17:55 default-k8s-diff-port-319221 kubelet[780]: W1014 21:17:55.504517     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/crio-a87bcdfa16ba4059c671c83887f46661718dce987363382be494b34965198062 WatchSource:0}: Error finding container a87bcdfa16ba4059c671c83887f46661718dce987363382be494b34965198062: Status 404 returned error can't find the container with id a87bcdfa16ba4059c671c83887f46661718dce987363382be494b34965198062
	Oct 14 21:17:55 default-k8s-diff-port-319221 kubelet[780]: W1014 21:17:55.514405     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/crio-455839f49512b3f411cb0143c25930918e3ce40aa621e91eda41cfc09c09470b WatchSource:0}: Error finding container 455839f49512b3f411cb0143c25930918e3ce40aa621e91eda41cfc09c09470b: Status 404 returned error can't find the container with id 455839f49512b3f411cb0143c25930918e3ce40aa621e91eda41cfc09c09470b
	Oct 14 21:18:00 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:00.561640     780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 14 21:18:00 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:00.796164     780 scope.go:117] "RemoveContainer" containerID="ba6c1d7b3b544f7f7b5f38844782ff3b9db297b3b67aa844e397f62bc32dbf94"
	Oct 14 21:18:01 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:01.804202     780 scope.go:117] "RemoveContainer" containerID="ba6c1d7b3b544f7f7b5f38844782ff3b9db297b3b67aa844e397f62bc32dbf94"
	Oct 14 21:18:01 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:01.804973     780 scope.go:117] "RemoveContainer" containerID="fd4d31ec998842bbcb345d3d43ed7d6eb7cffac54c531548ae712c7d39a4c093"
	Oct 14 21:18:01 default-k8s-diff-port-319221 kubelet[780]: E1014 21:18:01.805257     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fzndb_kubernetes-dashboard(4926e00b-dc09-4ea2-870b-1cea1a75ba3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fzndb" podUID="4926e00b-dc09-4ea2-870b-1cea1a75ba3a"
	Oct 14 21:18:02 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:02.825675     780 scope.go:117] "RemoveContainer" containerID="fd4d31ec998842bbcb345d3d43ed7d6eb7cffac54c531548ae712c7d39a4c093"
	Oct 14 21:18:02 default-k8s-diff-port-319221 kubelet[780]: E1014 21:18:02.829347     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fzndb_kubernetes-dashboard(4926e00b-dc09-4ea2-870b-1cea1a75ba3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fzndb" podUID="4926e00b-dc09-4ea2-870b-1cea1a75ba3a"
	Oct 14 21:18:05 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:05.474955     780 scope.go:117] "RemoveContainer" containerID="fd4d31ec998842bbcb345d3d43ed7d6eb7cffac54c531548ae712c7d39a4c093"
	Oct 14 21:18:05 default-k8s-diff-port-319221 kubelet[780]: E1014 21:18:05.475132     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fzndb_kubernetes-dashboard(4926e00b-dc09-4ea2-870b-1cea1a75ba3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fzndb" podUID="4926e00b-dc09-4ea2-870b-1cea1a75ba3a"
	Oct 14 21:18:20 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:20.689208     780 scope.go:117] "RemoveContainer" containerID="fd4d31ec998842bbcb345d3d43ed7d6eb7cffac54c531548ae712c7d39a4c093"
	Oct 14 21:18:20 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:20.871201     780 scope.go:117] "RemoveContainer" containerID="fd4d31ec998842bbcb345d3d43ed7d6eb7cffac54c531548ae712c7d39a4c093"
	Oct 14 21:18:21 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:21.875157     780 scope.go:117] "RemoveContainer" containerID="f0c2c88aa3e33c7438999ff7823b3abf4b950c125e431b49ba13b42b0e840572"
	Oct 14 21:18:21 default-k8s-diff-port-319221 kubelet[780]: E1014 21:18:21.875761     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fzndb_kubernetes-dashboard(4926e00b-dc09-4ea2-870b-1cea1a75ba3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fzndb" podUID="4926e00b-dc09-4ea2-870b-1cea1a75ba3a"
	Oct 14 21:18:21 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:21.896853     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kvztt" podStartSLOduration=15.031349991 podStartE2EDuration="26.895418972s" podCreationTimestamp="2025-10-14 21:17:55 +0000 UTC" firstStartedPulling="2025-10-14 21:17:55.517436255 +0000 UTC m=+11.165205214" lastFinishedPulling="2025-10-14 21:18:07.381505237 +0000 UTC m=+23.029274195" observedRunningTime="2025-10-14 21:18:07.862759818 +0000 UTC m=+23.510528785" watchObservedRunningTime="2025-10-14 21:18:21.895418972 +0000 UTC m=+37.543187930"
	Oct 14 21:18:22 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:22.882078     780 scope.go:117] "RemoveContainer" containerID="2401b34dca3a489e7d8635faece0f20bdff99d068044275db0dd3354167b954c"
	Oct 14 21:18:25 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:25.474545     780 scope.go:117] "RemoveContainer" containerID="f0c2c88aa3e33c7438999ff7823b3abf4b950c125e431b49ba13b42b0e840572"
	Oct 14 21:18:25 default-k8s-diff-port-319221 kubelet[780]: E1014 21:18:25.475202     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fzndb_kubernetes-dashboard(4926e00b-dc09-4ea2-870b-1cea1a75ba3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fzndb" podUID="4926e00b-dc09-4ea2-870b-1cea1a75ba3a"
	Oct 14 21:18:37 default-k8s-diff-port-319221 kubelet[780]: I1014 21:18:37.689954     780 scope.go:117] "RemoveContainer" containerID="f0c2c88aa3e33c7438999ff7823b3abf4b950c125e431b49ba13b42b0e840572"
	Oct 14 21:18:37 default-k8s-diff-port-319221 kubelet[780]: E1014 21:18:37.690150     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fzndb_kubernetes-dashboard(4926e00b-dc09-4ea2-870b-1cea1a75ba3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fzndb" podUID="4926e00b-dc09-4ea2-870b-1cea1a75ba3a"
	Oct 14 21:18:44 default-k8s-diff-port-319221 kubelet[780]: E1014 21:18:44.821532     780 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/09b53eb1fb5a0a5f2c1e411b0fe869e3a97e38d2a7734b8294f6af0fa69b01d3/crio-d6007dc5b48d505a48ea6d8ce98ae4cf547a248f9e8df04f475d8baa1875295b\": RecentStats: unable to find data in memory cache]"
	Oct 14 21:18:44 default-k8s-diff-port-319221 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 14 21:18:44 default-k8s-diff-port-319221 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 14 21:18:44 default-k8s-diff-port-319221 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b7fe5f8da58a66adccb01bb7fd806b32a9fe61f3e4fa8aa6b71649dc014f1c0a] <==
	2025/10/14 21:18:07 Using namespace: kubernetes-dashboard
	2025/10/14 21:18:07 Using in-cluster config to connect to apiserver
	2025/10/14 21:18:07 Using secret token for csrf signing
	2025/10/14 21:18:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/14 21:18:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/14 21:18:07 Successful initial request to the apiserver, version: v1.34.1
	2025/10/14 21:18:07 Generating JWE encryption key
	2025/10/14 21:18:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/14 21:18:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/14 21:18:07 Initializing JWE encryption key from synchronized object
	2025/10/14 21:18:07 Creating in-cluster Sidecar client
	2025/10/14 21:18:07 Serving insecurely on HTTP port: 9090
	2025/10/14 21:18:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/14 21:18:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/14 21:18:07 Starting overwatch
	
	
	==> storage-provisioner [2401b34dca3a489e7d8635faece0f20bdff99d068044275db0dd3354167b954c] <==
	I1014 21:17:52.085373       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1014 21:18:22.103938       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f361b540f6ffbd7d9ffc03c5553e783416a8e0b539626e67992aa338f683445d] <==
	I1014 21:18:22.988548       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 21:18:23.018664       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 21:18:23.019022       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1014 21:18:23.026131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:26.481822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:30.742852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:34.340761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:37.394843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:40.416489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:40.424484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 21:18:40.424635       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 21:18:40.424790       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-319221_1c336939-6046-4f5d-96e3-90a54aab55e7!
	I1014 21:18:40.425684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32eebcf8-6bdd-4075-bafa-c8aa6a4d9e02", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-319221_1c336939-6046-4f5d-96e3-90a54aab55e7 became leader
	W1014 21:18:40.434348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:40.438065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 21:18:40.525454       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-319221_1c336939-6046-4f5d-96e3-90a54aab55e7!
	W1014 21:18:42.441308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:42.446269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:44.449587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:44.456797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:46.460114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:46.464991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:48.468851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:48.474071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-319221 -n default-k8s-diff-port-319221
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-319221 -n default-k8s-diff-port-319221: exit status 2 (385.0122ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-319221 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (7.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-650951 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-650951 --alsologtostderr -v=1: exit status 80 (1.950918704s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-650951 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 21:19:03.835594  748338 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:19:03.835830  748338 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:19:03.835859  748338 out.go:374] Setting ErrFile to fd 2...
	I1014 21:19:03.835878  748338 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:19:03.836170  748338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:19:03.836541  748338 out.go:368] Setting JSON to false
	I1014 21:19:03.836617  748338 mustload.go:65] Loading cluster: embed-certs-650951
	I1014 21:19:03.837039  748338 config.go:182] Loaded profile config "embed-certs-650951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:19:03.837546  748338 cli_runner.go:164] Run: docker container inspect embed-certs-650951 --format={{.State.Status}}
	I1014 21:19:03.854870  748338 host.go:66] Checking if "embed-certs-650951" exists ...
	I1014 21:19:03.855210  748338 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:19:03.931542  748338 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:78 SystemTime:2025-10-14 21:19:03.921301025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:19:03.932396  748338 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-650951 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1014 21:19:03.936519  748338 out.go:179] * Pausing node embed-certs-650951 ... 
	I1014 21:19:03.939401  748338 host.go:66] Checking if "embed-certs-650951" exists ...
	I1014 21:19:03.939745  748338 ssh_runner.go:195] Run: systemctl --version
	I1014 21:19:03.939795  748338 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-650951
	I1014 21:19:03.960304  748338 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/embed-certs-650951/id_rsa Username:docker}
	I1014 21:19:04.067450  748338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:19:04.094759  748338 pause.go:52] kubelet running: true
	I1014 21:19:04.094828  748338 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:19:04.419673  748338 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:19:04.419752  748338 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:19:04.508650  748338 cri.go:89] found id: "640b830bec54943fa0a57e6d768ea1a14473244182a0769946d80d3ae3f48f13"
	I1014 21:19:04.508722  748338 cri.go:89] found id: "9bd0bfe570cf69a832134e6d457e4570261271dfe99b30a8105ef7eff7981b88"
	I1014 21:19:04.508747  748338 cri.go:89] found id: "723dfd75d8534883b0a233c3f02128ece81fcf6a78b873b00b6e865a1aaaf248"
	I1014 21:19:04.508769  748338 cri.go:89] found id: "bd1a815a5e7132e6fea924ac0892c08444dc8492532d62c704c55ddb2bee3069"
	I1014 21:19:04.508807  748338 cri.go:89] found id: "1ec4896f4fa53d8712ff6b3a2795cdf529deeb135de65cde7135870485394ab7"
	I1014 21:19:04.508831  748338 cri.go:89] found id: "75ec260b360cee33d35761419902c0686412140608fc505b0a7fd6b06809b9be"
	I1014 21:19:04.508851  748338 cri.go:89] found id: "26600dc7ca36cf22f70e429a378f851aa6e2859e07559b7093ae575df3166637"
	I1014 21:19:04.508893  748338 cri.go:89] found id: "9a14bf7659c1b6c8efd0475e0eaeb245bdc49f5e97446a64f00af1e088d658af"
	I1014 21:19:04.508917  748338 cri.go:89] found id: "af87a8da5c17113367f1c826c6f15aabf7ed94ed0d2a08fe11374ca367dd615c"
	I1014 21:19:04.508941  748338 cri.go:89] found id: "981824df308207c62b5bdaac3e3ae1cc93631eae4528ed56bd8441f91008ee58"
	I1014 21:19:04.508973  748338 cri.go:89] found id: "f8e20f3e5a5c85378adc47d0f5dc7dae779c420c3c2c99ef013b4e3e8a804df1"
	I1014 21:19:04.508995  748338 cri.go:89] found id: ""
	I1014 21:19:04.509077  748338 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:19:04.523020  748338 retry.go:31] will retry after 164.566043ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:19:04Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:19:04.688529  748338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:19:04.708849  748338 pause.go:52] kubelet running: false
	I1014 21:19:04.708976  748338 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:19:04.947826  748338 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:19:04.947963  748338 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:19:05.041354  748338 cri.go:89] found id: "640b830bec54943fa0a57e6d768ea1a14473244182a0769946d80d3ae3f48f13"
	I1014 21:19:05.041431  748338 cri.go:89] found id: "9bd0bfe570cf69a832134e6d457e4570261271dfe99b30a8105ef7eff7981b88"
	I1014 21:19:05.041453  748338 cri.go:89] found id: "723dfd75d8534883b0a233c3f02128ece81fcf6a78b873b00b6e865a1aaaf248"
	I1014 21:19:05.041472  748338 cri.go:89] found id: "bd1a815a5e7132e6fea924ac0892c08444dc8492532d62c704c55ddb2bee3069"
	I1014 21:19:05.041506  748338 cri.go:89] found id: "1ec4896f4fa53d8712ff6b3a2795cdf529deeb135de65cde7135870485394ab7"
	I1014 21:19:05.041528  748338 cri.go:89] found id: "75ec260b360cee33d35761419902c0686412140608fc505b0a7fd6b06809b9be"
	I1014 21:19:05.041549  748338 cri.go:89] found id: "26600dc7ca36cf22f70e429a378f851aa6e2859e07559b7093ae575df3166637"
	I1014 21:19:05.041568  748338 cri.go:89] found id: "9a14bf7659c1b6c8efd0475e0eaeb245bdc49f5e97446a64f00af1e088d658af"
	I1014 21:19:05.041599  748338 cri.go:89] found id: "af87a8da5c17113367f1c826c6f15aabf7ed94ed0d2a08fe11374ca367dd615c"
	I1014 21:19:05.041627  748338 cri.go:89] found id: "981824df308207c62b5bdaac3e3ae1cc93631eae4528ed56bd8441f91008ee58"
	I1014 21:19:05.041658  748338 cri.go:89] found id: "f8e20f3e5a5c85378adc47d0f5dc7dae779c420c3c2c99ef013b4e3e8a804df1"
	I1014 21:19:05.041690  748338 cri.go:89] found id: ""
	I1014 21:19:05.041776  748338 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:19:05.053045  748338 retry.go:31] will retry after 291.616315ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:19:05Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:19:05.345626  748338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:19:05.361184  748338 pause.go:52] kubelet running: false
	I1014 21:19:05.361309  748338 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:19:05.557500  748338 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:19:05.557634  748338 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:19:05.660433  748338 cri.go:89] found id: "640b830bec54943fa0a57e6d768ea1a14473244182a0769946d80d3ae3f48f13"
	I1014 21:19:05.660504  748338 cri.go:89] found id: "9bd0bfe570cf69a832134e6d457e4570261271dfe99b30a8105ef7eff7981b88"
	I1014 21:19:05.660523  748338 cri.go:89] found id: "723dfd75d8534883b0a233c3f02128ece81fcf6a78b873b00b6e865a1aaaf248"
	I1014 21:19:05.660543  748338 cri.go:89] found id: "bd1a815a5e7132e6fea924ac0892c08444dc8492532d62c704c55ddb2bee3069"
	I1014 21:19:05.660577  748338 cri.go:89] found id: "1ec4896f4fa53d8712ff6b3a2795cdf529deeb135de65cde7135870485394ab7"
	I1014 21:19:05.660600  748338 cri.go:89] found id: "75ec260b360cee33d35761419902c0686412140608fc505b0a7fd6b06809b9be"
	I1014 21:19:05.660620  748338 cri.go:89] found id: "26600dc7ca36cf22f70e429a378f851aa6e2859e07559b7093ae575df3166637"
	I1014 21:19:05.660640  748338 cri.go:89] found id: "9a14bf7659c1b6c8efd0475e0eaeb245bdc49f5e97446a64f00af1e088d658af"
	I1014 21:19:05.660670  748338 cri.go:89] found id: "af87a8da5c17113367f1c826c6f15aabf7ed94ed0d2a08fe11374ca367dd615c"
	I1014 21:19:05.660707  748338 cri.go:89] found id: "981824df308207c62b5bdaac3e3ae1cc93631eae4528ed56bd8441f91008ee58"
	I1014 21:19:05.660726  748338 cri.go:89] found id: "f8e20f3e5a5c85378adc47d0f5dc7dae779c420c3c2c99ef013b4e3e8a804df1"
	I1014 21:19:05.660759  748338 cri.go:89] found id: ""
	I1014 21:19:05.660843  748338 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:19:05.681686  748338 out.go:203] 
	W1014 21:19:05.684821  748338 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:19:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:19:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 21:19:05.684893  748338 out.go:285] * 
	* 
	W1014 21:19:05.695973  748338 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 21:19:05.699079  748338 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-650951 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-650951
helpers_test.go:243: (dbg) docker inspect embed-certs-650951:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749",
	        "Created": "2025-10-14T21:16:16.666518706Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 742629,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:17:57.875841001Z",
	            "FinishedAt": "2025-10-14T21:17:56.712241407Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749/hostname",
	        "HostsPath": "/var/lib/docker/containers/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749/hosts",
	        "LogPath": "/var/lib/docker/containers/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749-json.log",
	        "Name": "/embed-certs-650951",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-650951:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-650951",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749",
	                "LowerDir": "/var/lib/docker/overlay2/11159d7ae5bf1eee965b74cca31aeff653ffd76d7ceea8b33383d89124ef69e1-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11159d7ae5bf1eee965b74cca31aeff653ffd76d7ceea8b33383d89124ef69e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11159d7ae5bf1eee965b74cca31aeff653ffd76d7ceea8b33383d89124ef69e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11159d7ae5bf1eee965b74cca31aeff653ffd76d7ceea8b33383d89124ef69e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-650951",
	                "Source": "/var/lib/docker/volumes/embed-certs-650951/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-650951",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-650951",
	                "name.minikube.sigs.k8s.io": "embed-certs-650951",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a5bf0521513d5ee994fe65388b5e6808e66668310b89f59eef362bcab587425",
	            "SandboxKey": "/var/run/docker/netns/2a5bf0521513",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33807"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33808"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33811"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33809"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33810"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-650951": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:e4:e6:c4:31:51",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dec8f1ebdf6a87cb6a25803785fed11070bcba6788531da683aec0eefde26c55",
	                    "EndpointID": "11d688700d5117d10782913dd7f50c2d96989d3656cde07893f98bbfd91b1e47",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-650951",
	                        "b2e56cd2dd30"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-650951 -n embed-certs-650951
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-650951 -n embed-certs-650951: exit status 2 (440.782627ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-650951 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-650951 logs -n 25: (1.719095355s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:15 UTC │
	│ start   │ -p cert-expiration-341199 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ delete  │ -p cert-expiration-341199                                                                                                                                                                                                                     │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ start   │ -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:17 UTC │
	│ image   │ old-k8s-version-624341 image list --format=json                                                                                                                                                                                               │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ pause   │ -p old-k8s-version-624341 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │                     │
	│ delete  │ -p old-k8s-version-624341                                                                                                                                                                                                                     │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:16 UTC │
	│ delete  │ -p old-k8s-version-624341                                                                                                                                                                                                                     │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:16 UTC │
	│ start   │ -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-319221 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-319221 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-319221 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ start   │ -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-650951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	│ stop    │ -p embed-certs-650951 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-650951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ start   │ -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:18 UTC │
	│ image   │ default-k8s-diff-port-319221 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ pause   │ -p default-k8s-diff-port-319221 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-319221                                                                                                                                                                                                               │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ delete  │ -p default-k8s-diff-port-319221                                                                                                                                                                                                               │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ delete  │ -p disable-driver-mounts-457849                                                                                                                                                                                                               │ disable-driver-mounts-457849 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ start   │ -p no-preload-970287 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │                     │
	│ image   │ embed-certs-650951 image list --format=json                                                                                                                                                                                                   │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ pause   │ -p embed-certs-650951 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:18:53
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:18:53.885943  746423 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:18:53.886116  746423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:18:53.886126  746423 out.go:374] Setting ErrFile to fd 2...
	I1014 21:18:53.886131  746423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:18:53.886491  746423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:18:53.887020  746423 out.go:368] Setting JSON to false
	I1014 21:18:53.888022  746423 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14484,"bootTime":1760462250,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:18:53.888122  746423 start.go:141] virtualization:  
	I1014 21:18:53.892083  746423 out.go:179] * [no-preload-970287] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:18:53.896149  746423 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:18:53.896271  746423 notify.go:220] Checking for updates...
	I1014 21:18:53.902310  746423 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:18:53.905265  746423 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:18:53.908172  746423 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:18:53.911146  746423 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:18:53.914136  746423 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:18:53.917471  746423 config.go:182] Loaded profile config "embed-certs-650951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:18:53.917596  746423 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:18:53.952602  746423 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:18:53.952745  746423 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:18:54.021618  746423 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:18:54.010863537 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:18:54.021731  746423 docker.go:318] overlay module found
	I1014 21:18:54.024997  746423 out.go:179] * Using the docker driver based on user configuration
	I1014 21:18:54.027846  746423 start.go:305] selected driver: docker
	I1014 21:18:54.027874  746423 start.go:925] validating driver "docker" against <nil>
	I1014 21:18:54.027890  746423 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:18:54.028833  746423 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:18:54.090242  746423 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:18:54.079264405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:18:54.090408  746423 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 21:18:54.090652  746423 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:18:54.093743  746423 out.go:179] * Using Docker driver with root privileges
	I1014 21:18:54.096607  746423 cni.go:84] Creating CNI manager for ""
	I1014 21:18:54.096688  746423 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:18:54.096701  746423 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 21:18:54.096783  746423 start.go:349] cluster config:
	{Name:no-preload-970287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-970287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:18:54.099997  746423 out.go:179] * Starting "no-preload-970287" primary control-plane node in "no-preload-970287" cluster
	I1014 21:18:54.103029  746423 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:18:54.105902  746423 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:18:54.109111  746423 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:18:54.109263  746423 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/config.json ...
	I1014 21:18:54.109300  746423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/config.json: {Name:mk94c562bb839278790e95c3cb5b92738df36261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:18:54.109584  746423 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:18:54.109991  746423 cache.go:107] acquiring lock: {Name:mk867258823b28c0a6577fb68ee404726b134331 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:18:54.110067  746423 cache.go:115] /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1014 21:18:54.110075  746423 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 103.6µs
	I1014 21:18:54.110096  746423 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1014 21:18:54.110109  746423 cache.go:107] acquiring lock: {Name:mk02685c74746afe884538e00ccba24a205a7a97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:18:54.110216  746423 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1014 21:18:54.110447  746423 cache.go:107] acquiring lock: {Name:mkfd85b5cd748a277287b0e4a5c32afde0323ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:18:54.110561  746423 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1014 21:18:54.110692  746423 cache.go:107] acquiring lock: {Name:mk61dc7cd6219ec2f627a7ba4a170447679fd40f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:18:54.110795  746423 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1014 21:18:54.110909  746423 cache.go:107] acquiring lock: {Name:mk06b01febd0a216a260e0d94f0882d94b3570e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:18:54.111003  746423 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1014 21:18:54.111130  746423 cache.go:107] acquiring lock: {Name:mkd93b76296abb1739672d82097889947058106c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:18:54.111228  746423 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1014 21:18:54.111377  746423 cache.go:107] acquiring lock: {Name:mk11554cc008e9d39e9e0a5de0ae2783f48765f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:18:54.111463  746423 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1014 21:18:54.111584  746423 cache.go:107] acquiring lock: {Name:mk431d2046da2d67f2357e06c8f8bd43155a4f68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:18:54.111697  746423 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1014 21:18:54.120529  746423 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1014 21:18:54.121122  746423 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1014 21:18:54.121278  746423 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1014 21:18:54.121896  746423 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1014 21:18:54.121918  746423 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1014 21:18:54.122608  746423 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1014 21:18:54.123044  746423 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1014 21:18:54.139038  746423 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:18:54.139071  746423 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:18:54.139088  746423 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:18:54.139115  746423 start.go:360] acquireMachinesLock for no-preload-970287: {Name:mk73033dcc861b0fd6c8055acaffbf9362e5f942 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:18:54.139216  746423 start.go:364] duration metric: took 85.13µs to acquireMachinesLock for "no-preload-970287"
	I1014 21:18:54.139242  746423 start.go:93] Provisioning new machine with config: &{Name:no-preload-970287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-970287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:18:54.139301  746423 start.go:125] createHost starting for "" (driver="docker")
	I1014 21:18:54.142966  746423 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 21:18:54.143216  746423 start.go:159] libmachine.API.Create for "no-preload-970287" (driver="docker")
	I1014 21:18:54.143261  746423 client.go:168] LocalClient.Create starting
	I1014 21:18:54.143350  746423 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem
	I1014 21:18:54.143386  746423 main.go:141] libmachine: Decoding PEM data...
	I1014 21:18:54.143404  746423 main.go:141] libmachine: Parsing certificate...
	I1014 21:18:54.143460  746423 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem
	I1014 21:18:54.143482  746423 main.go:141] libmachine: Decoding PEM data...
	I1014 21:18:54.143491  746423 main.go:141] libmachine: Parsing certificate...
	I1014 21:18:54.143871  746423 cli_runner.go:164] Run: docker network inspect no-preload-970287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 21:18:54.168678  746423 cli_runner.go:211] docker network inspect no-preload-970287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 21:18:54.168763  746423 network_create.go:284] running [docker network inspect no-preload-970287] to gather additional debugging logs...
	I1014 21:18:54.168785  746423 cli_runner.go:164] Run: docker network inspect no-preload-970287
	W1014 21:18:54.185805  746423 cli_runner.go:211] docker network inspect no-preload-970287 returned with exit code 1
	I1014 21:18:54.185837  746423 network_create.go:287] error running [docker network inspect no-preload-970287]: docker network inspect no-preload-970287: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-970287 not found
	I1014 21:18:54.185864  746423 network_create.go:289] output of [docker network inspect no-preload-970287]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-970287 not found
	
	** /stderr **
	I1014 21:18:54.185963  746423 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:18:54.214171  746423 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3483fd8ccce IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:16:f0:2b:be:5b} reservation:<nil>}
	I1014 21:18:54.214511  746423 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-92eecc9d5dcd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:46:39:04:de:cc:9d} reservation:<nil>}
	I1014 21:18:54.214871  746423 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6db3c426d67c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:2a:f4:c0:70:fe} reservation:<nil>}
	I1014 21:18:54.215368  746423 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ca53a0}
	I1014 21:18:54.215403  746423 network_create.go:124] attempt to create docker network no-preload-970287 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1014 21:18:54.215507  746423 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-970287 no-preload-970287
	I1014 21:18:54.293184  746423 network_create.go:108] docker network no-preload-970287 192.168.76.0/24 created
	I1014 21:18:54.293212  746423 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-970287" container
	I1014 21:18:54.293304  746423 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 21:18:54.310533  746423 cli_runner.go:164] Run: docker volume create no-preload-970287 --label name.minikube.sigs.k8s.io=no-preload-970287 --label created_by.minikube.sigs.k8s.io=true
	I1014 21:18:54.331185  746423 oci.go:103] Successfully created a docker volume no-preload-970287
	I1014 21:18:54.331299  746423 cli_runner.go:164] Run: docker run --rm --name no-preload-970287-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-970287 --entrypoint /usr/bin/test -v no-preload-970287:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 21:18:54.461429  746423 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1014 21:18:54.481316  746423 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1014 21:18:54.481836  746423 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1014 21:18:54.483701  746423 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1014 21:18:54.485388  746423 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1014 21:18:54.492475  746423 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1014 21:18:54.494375  746423 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1014 21:18:54.556875  746423 cache.go:157] /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1014 21:18:54.556946  746423 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 445.837995ms
	I1014 21:18:54.556976  746423 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1014 21:18:55.005852  746423 cache.go:157] /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1014 21:18:55.005884  746423 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 894.975852ms
	I1014 21:18:55.005909  746423 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1014 21:18:55.057028  746423 oci.go:107] Successfully prepared a docker volume no-preload-970287
	I1014 21:18:55.057070  746423 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1014 21:18:55.057218  746423 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1014 21:18:55.057382  746423 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 21:18:55.126561  746423 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-970287 --name no-preload-970287 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-970287 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-970287 --network no-preload-970287 --ip 192.168.76.2 --volume no-preload-970287:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 21:18:55.410188  746423 cache.go:157] /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1014 21:18:55.410222  746423 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.299531785s
	I1014 21:18:55.410239  746423 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1014 21:18:55.502459  746423 cache.go:157] /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1014 21:18:55.502529  746423 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.39094912s
	I1014 21:18:55.502556  746423 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1014 21:18:55.517095  746423 cache.go:157] /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1014 21:18:55.517153  746423 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.407042773s
	I1014 21:18:55.517166  746423 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1014 21:18:55.532074  746423 cache.go:157] /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1014 21:18:55.532102  746423 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.421658234s
	I1014 21:18:55.532114  746423 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1014 21:18:55.623868  746423 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Running}}
	I1014 21:18:55.647427  746423 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:18:55.681004  746423 cli_runner.go:164] Run: docker exec no-preload-970287 stat /var/lib/dpkg/alternatives/iptables
	I1014 21:18:55.755589  746423 oci.go:144] the created container "no-preload-970287" has a running status.
	I1014 21:18:55.755637  746423 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa...
	I1014 21:18:56.822139  746423 cache.go:157] /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1014 21:18:56.822166  746423 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.710791483s
	I1014 21:18:56.822177  746423 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1014 21:18:56.822189  746423 cache.go:87] Successfully saved all images to host disk.
	I1014 21:18:57.315629  746423 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 21:18:57.337505  746423 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:18:57.355191  746423 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 21:18:57.355215  746423 kic_runner.go:114] Args: [docker exec --privileged no-preload-970287 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 21:18:57.396763  746423 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:18:57.414943  746423 machine.go:93] provisionDockerMachine start ...
	I1014 21:18:57.415042  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:18:57.437624  746423 main.go:141] libmachine: Using SSH client type: native
	I1014 21:18:57.437975  746423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1014 21:18:57.437990  746423 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:18:57.584052  746423 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-970287
	
	I1014 21:18:57.584075  746423 ubuntu.go:182] provisioning hostname "no-preload-970287"
	I1014 21:18:57.584136  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:18:57.603133  746423 main.go:141] libmachine: Using SSH client type: native
	I1014 21:18:57.603442  746423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1014 21:18:57.603453  746423 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-970287 && echo "no-preload-970287" | sudo tee /etc/hostname
	I1014 21:18:57.759403  746423 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-970287
	
	I1014 21:18:57.759534  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:18:57.777495  746423 main.go:141] libmachine: Using SSH client type: native
	I1014 21:18:57.777800  746423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1014 21:18:57.777821  746423 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-970287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-970287/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-970287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:18:57.920601  746423 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 21:18:57.920629  746423 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:18:57.920677  746423 ubuntu.go:190] setting up certificates
	I1014 21:18:57.920688  746423 provision.go:84] configureAuth start
	I1014 21:18:57.920763  746423 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970287
	I1014 21:18:57.938434  746423 provision.go:143] copyHostCerts
	I1014 21:18:57.938510  746423 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:18:57.938525  746423 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:18:57.938608  746423 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:18:57.938718  746423 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:18:57.938732  746423 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:18:57.938761  746423 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:18:57.938842  746423 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:18:57.938852  746423 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:18:57.938877  746423 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:18:57.938929  746423 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.no-preload-970287 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-970287]
	I1014 21:18:58.224164  746423 provision.go:177] copyRemoteCerts
	I1014 21:18:58.224234  746423 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:18:58.224285  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:18:58.242237  746423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:18:58.344024  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:18:58.365704  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 21:18:58.383225  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 21:18:58.400836  746423 provision.go:87] duration metric: took 480.130009ms to configureAuth
	I1014 21:18:58.400864  746423 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:18:58.401048  746423 config.go:182] Loaded profile config "no-preload-970287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:18:58.401166  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:18:58.418465  746423 main.go:141] libmachine: Using SSH client type: native
	I1014 21:18:58.418792  746423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1014 21:18:58.418813  746423 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:18:58.755328  746423 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:18:58.755397  746423 machine.go:96] duration metric: took 1.34042981s to provisionDockerMachine
	I1014 21:18:58.755421  746423 client.go:171] duration metric: took 4.612152806s to LocalClient.Create
	I1014 21:18:58.755457  746423 start.go:167] duration metric: took 4.612246765s to libmachine.API.Create "no-preload-970287"
	I1014 21:18:58.755480  746423 start.go:293] postStartSetup for "no-preload-970287" (driver="docker")
	I1014 21:18:58.755511  746423 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:18:58.755597  746423 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:18:58.755674  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:18:58.772812  746423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:18:58.876836  746423 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:18:58.880257  746423 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:18:58.880284  746423 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:18:58.880295  746423 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:18:58.880376  746423 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:18:58.880466  746423 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:18:58.880580  746423 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:18:58.888577  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:18:58.908626  746423 start.go:296] duration metric: took 153.112575ms for postStartSetup
	I1014 21:18:58.908986  746423 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970287
	I1014 21:18:58.925607  746423 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/config.json ...
	I1014 21:18:58.925893  746423 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:18:58.925941  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:18:58.942834  746423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:18:59.045763  746423 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:18:59.050588  746423 start.go:128] duration metric: took 4.911270597s to createHost
	I1014 21:18:59.050615  746423 start.go:83] releasing machines lock for "no-preload-970287", held for 4.91138936s
	I1014 21:18:59.050686  746423 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970287
	I1014 21:18:59.068563  746423 ssh_runner.go:195] Run: cat /version.json
	I1014 21:18:59.068631  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:18:59.068878  746423 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:18:59.068954  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:18:59.088454  746423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:18:59.102129  746423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:18:59.187922  746423 ssh_runner.go:195] Run: systemctl --version
	I1014 21:18:59.310340  746423 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:18:59.351789  746423 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:18:59.356262  746423 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:18:59.356359  746423 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:18:59.386651  746423 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1014 21:18:59.386673  746423 start.go:495] detecting cgroup driver to use...
	I1014 21:18:59.386705  746423 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:18:59.386768  746423 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:18:59.404092  746423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:18:59.416693  746423 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:18:59.416752  746423 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:18:59.434373  746423 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:18:59.453817  746423 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:18:59.582792  746423 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:18:59.704343  746423 docker.go:234] disabling docker service ...
	I1014 21:18:59.704410  746423 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:18:59.726268  746423 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:18:59.739297  746423 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:18:59.853316  746423 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:18:59.985135  746423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:18:59.998106  746423 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:19:00.072842  746423 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:19:00.072990  746423 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:00.107718  746423 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:19:00.107850  746423 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:00.140946  746423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:00.189159  746423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:00.235791  746423 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:19:00.257249  746423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:00.281928  746423 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:00.301541  746423 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:00.342304  746423 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:19:00.354925  746423 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:19:00.371807  746423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:19:00.515628  746423 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 21:19:00.642741  746423 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:19:00.642852  746423 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:19:00.646755  746423 start.go:563] Will wait 60s for crictl version
	I1014 21:19:00.646858  746423 ssh_runner.go:195] Run: which crictl
	I1014 21:19:00.650329  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:19:00.682991  746423 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:19:00.683118  746423 ssh_runner.go:195] Run: crio --version
	I1014 21:19:00.712211  746423 ssh_runner.go:195] Run: crio --version
	I1014 21:19:00.744661  746423 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 21:19:00.747514  746423 cli_runner.go:164] Run: docker network inspect no-preload-970287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:19:00.768655  746423 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1014 21:19:00.772562  746423 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:19:00.782216  746423 kubeadm.go:883] updating cluster {Name:no-preload-970287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-970287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:19:00.782321  746423 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:19:00.782365  746423 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:19:00.806370  746423 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1014 21:19:00.806396  746423 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 21:19:00.806457  746423 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:19:00.806667  746423 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1014 21:19:00.806782  746423 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1014 21:19:00.806876  746423 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1014 21:19:00.806973  746423 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1014 21:19:00.807074  746423 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1014 21:19:00.807178  746423 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1014 21:19:00.807273  746423 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1014 21:19:00.809023  746423 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1014 21:19:00.809074  746423 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1014 21:19:00.809132  746423 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1014 21:19:00.809023  746423 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1014 21:19:00.809250  746423 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1014 21:19:00.809294  746423 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1014 21:19:00.809336  746423 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1014 21:19:00.809382  746423 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:19:01.042719  746423 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1014 21:19:01.063532  746423 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1014 21:19:01.087387  746423 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1014 21:19:01.088081  746423 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1014 21:19:01.088137  746423 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1014 21:19:01.088199  746423 ssh_runner.go:195] Run: which crictl
	I1014 21:19:01.088782  746423 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1014 21:19:01.102238  746423 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1014 21:19:01.102328  746423 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1014 21:19:01.102579  746423 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1014 21:19:01.143929  746423 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1014 21:19:01.144032  746423 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1014 21:19:01.144133  746423 ssh_runner.go:195] Run: which crictl
	I1014 21:19:01.216563  746423 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1014 21:19:01.216650  746423 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1014 21:19:01.216715  746423 ssh_runner.go:195] Run: which crictl
	I1014 21:19:01.223351  746423 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1014 21:19:01.223532  746423 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1014 21:19:01.223589  746423 ssh_runner.go:195] Run: which crictl
	I1014 21:19:01.223503  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1014 21:19:01.254557  746423 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1014 21:19:01.254641  746423 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1014 21:19:01.254711  746423 ssh_runner.go:195] Run: which crictl
	I1014 21:19:01.283446  746423 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1014 21:19:01.283549  746423 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1014 21:19:01.283610  746423 ssh_runner.go:195] Run: which crictl
	I1014 21:19:01.288890  746423 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1014 21:19:01.288944  746423 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1014 21:19:01.289042  746423 ssh_runner.go:195] Run: which crictl
	I1014 21:19:01.289190  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1014 21:19:01.289283  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1014 21:19:01.289415  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1014 21:19:01.289443  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1014 21:19:01.289521  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1014 21:19:01.290415  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1014 21:19:01.387437  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1014 21:19:01.387549  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1014 21:19:01.400867  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1014 21:19:01.401012  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1014 21:19:01.401108  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1014 21:19:01.401196  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1014 21:19:01.401314  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1014 21:19:01.473889  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1014 21:19:01.474014  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1014 21:19:01.537893  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1014 21:19:01.537994  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1014 21:19:01.538027  746423 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1014 21:19:01.538324  746423 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1014 21:19:01.538076  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1014 21:19:01.538206  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1014 21:19:01.543605  746423 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1014 21:19:01.543757  746423 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1014 21:19:01.543866  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1014 21:19:01.637578  746423 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1014 21:19:01.637682  746423 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1014 21:19:01.637758  746423 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1014 21:19:01.637804  746423 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1014 21:19:01.637857  746423 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1014 21:19:01.637902  746423 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1014 21:19:01.637956  746423 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1014 21:19:01.637969  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1014 21:19:01.638025  746423 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1014 21:19:01.638075  746423 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1014 21:19:01.638145  746423 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1014 21:19:01.638194  746423 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1014 21:19:01.638262  746423 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1014 21:19:01.638277  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1014 21:19:01.702143  746423 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1014 21:19:01.702226  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1014 21:19:01.702317  746423 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1014 21:19:01.702350  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1014 21:19:01.702421  746423 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1014 21:19:01.702452  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1014 21:19:01.702537  746423 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1014 21:19:01.702567  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1014 21:19:01.702646  746423 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1014 21:19:01.702675  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1014 21:19:01.865676  746423 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1014 21:19:01.865773  746423 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1014 21:19:02.133004  746423 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1014 21:19:02.133210  746423 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:19:02.312130  746423 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1014 21:19:02.312162  746423 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1014 21:19:02.312303  746423 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1014 21:19:02.341206  746423 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1014 21:19:02.341252  746423 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:19:02.341402  746423 ssh_runner.go:195] Run: which crictl
	
	
	==> CRI-O <==
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.183987427Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fe9ec24a-1e4e-41f3-9394-489332967f4f name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.184930242Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a06c3b40-4e2c-44df-8751-bf9fc94afa99 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.185157414Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.204856201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.205075874Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/cc2c3cfcd010e507d4448b2c09e2bae5594daaa69b03f658d9993697f67458c4/merged/etc/passwd: no such file or directory"
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.205106882Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cc2c3cfcd010e507d4448b2c09e2bae5594daaa69b03f658d9993697f67458c4/merged/etc/group: no such file or directory"
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.205432058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.238456843Z" level=info msg="Created container 640b830bec54943fa0a57e6d768ea1a14473244182a0769946d80d3ae3f48f13: kube-system/storage-provisioner/storage-provisioner" id=a06c3b40-4e2c-44df-8751-bf9fc94afa99 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.24068085Z" level=info msg="Starting container: 640b830bec54943fa0a57e6d768ea1a14473244182a0769946d80d3ae3f48f13" id=9fe3f59b-d7e9-449d-be89-d88fb2a0f224 name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.243810575Z" level=info msg="Started container" PID=1647 containerID=640b830bec54943fa0a57e6d768ea1a14473244182a0769946d80d3ae3f48f13 description=kube-system/storage-provisioner/storage-provisioner id=9fe3f59b-d7e9-449d-be89-d88fb2a0f224 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aabfd233af99d5d40db7b81400259b2b3235088466ab85949bf3965c7d396d73
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.685548196Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.690622733Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.690662316Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.690682189Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.694770178Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.694961222Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.695044194Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.699822151Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.699973334Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.700044736Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.710243027Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.710427343Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.710548674Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.713782697Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.713933723Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	640b830bec549       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago       Running             storage-provisioner         2                   aabfd233af99d       storage-provisioner                          kube-system
	981824df30820       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago       Exited              dashboard-metrics-scraper   2                   26de3a92b7d29       dashboard-metrics-scraper-6ffb444bf9-shfk2   kubernetes-dashboard
	f8e20f3e5a5c8       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago       Running             kubernetes-dashboard        0                   beec1b676f773       kubernetes-dashboard-855c9754f9-7m6nl        kubernetes-dashboard
	6353f34d398a5       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   4edf27967113a       busybox                                      default
	9bd0bfe570cf6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   8ee6cc8890364       coredns-66bc5c9577-sllzf                     kube-system
	723dfd75d8534       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   101e4a51f7192       kindnet-x2jcr                                kube-system
	bd1a815a5e713       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   2ce7335ca1136       kube-proxy-2xsjs                             kube-system
	1ec4896f4fa53       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago       Exited              storage-provisioner         1                   aabfd233af99d       storage-provisioner                          kube-system
	75ec260b360ce       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   bf9b0ada461dd       kube-controller-manager-embed-certs-650951   kube-system
	26600dc7ca36c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   d5d16aaf738ee       etcd-embed-certs-650951                      kube-system
	9a14bf7659c1b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   c3f0ff7ac40fd       kube-apiserver-embed-certs-650951            kube-system
	af87a8da5c171       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   5052fc90a0dc6       kube-scheduler-embed-certs-650951            kube-system
	
	
	==> coredns [9bd0bfe570cf69a832134e6d457e4570261271dfe99b30a8105ef7eff7981b88] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39536 - 62945 "HINFO IN 2808594643827052679.7558689821458387517. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024777895s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-650951
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-650951
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=embed-certs-650951
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T21_16_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 21:16:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-650951
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 21:19:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 21:18:43 +0000   Tue, 14 Oct 2025 21:16:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 21:18:43 +0000   Tue, 14 Oct 2025 21:16:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 21:18:43 +0000   Tue, 14 Oct 2025 21:16:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 21:18:43 +0000   Tue, 14 Oct 2025 21:17:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-650951
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 fcbe15b75c40400db85e1834bebb59e4
	  System UUID:                27698974-6461-4c41-a998-671addfa4ff7
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-sllzf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 etcd-embed-certs-650951                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m23s
	  kube-system                 kindnet-x2jcr                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-embed-certs-650951             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-embed-certs-650951    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-2xsjs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-embed-certs-650951             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-shfk2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7m6nl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   Starting                 53s                    kube-proxy       
	  Normal   Starting                 2m31s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m31s (x8 over 2m31s)  kubelet          Node embed-certs-650951 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m31s (x8 over 2m31s)  kubelet          Node embed-certs-650951 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m31s (x8 over 2m31s)  kubelet          Node embed-certs-650951 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m23s                  kubelet          Node embed-certs-650951 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m23s                  kubelet          Node embed-certs-650951 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m23s                  kubelet          Node embed-certs-650951 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m19s                  node-controller  Node embed-certs-650951 event: Registered Node embed-certs-650951 in Controller
	  Normal   NodeReady                97s                    kubelet          Node embed-certs-650951 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 62s)      kubelet          Node embed-certs-650951 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 62s)      kubelet          Node embed-certs-650951 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 62s)      kubelet          Node embed-certs-650951 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node embed-certs-650951 event: Registered Node embed-certs-650951 in Controller
	
	
	==> dmesg <==
	[Oct14 20:47] overlayfs: idmapped layers are currently not supported
	[Oct14 20:48] overlayfs: idmapped layers are currently not supported
	[Oct14 20:49] overlayfs: idmapped layers are currently not supported
	[ +15.861847] overlayfs: idmapped layers are currently not supported
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	[Oct14 21:03] overlayfs: idmapped layers are currently not supported
	[Oct14 21:04] overlayfs: idmapped layers are currently not supported
	[Oct14 21:12] overlayfs: idmapped layers are currently not supported
	[Oct14 21:13] overlayfs: idmapped layers are currently not supported
	[ +32.288963] overlayfs: idmapped layers are currently not supported
	[Oct14 21:15] overlayfs: idmapped layers are currently not supported
	[Oct14 21:16] overlayfs: idmapped layers are currently not supported
	[ +21.775876] overlayfs: idmapped layers are currently not supported
	[Oct14 21:17] overlayfs: idmapped layers are currently not supported
	[Oct14 21:18] overlayfs: idmapped layers are currently not supported
	[Oct14 21:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [26600dc7ca36cf22f70e429a378f851aa6e2859e07559b7093ae575df3166637] <==
	{"level":"warn","ts":"2025-10-14T21:18:10.585347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.607017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.630444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.644875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.704904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.717906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.727617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.751408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.766921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.786466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.822988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.837855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.861011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.883908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.934882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.975047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.988946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:11.008621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:11.026146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:11.044513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:11.074442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:11.095981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:11.111750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:11.127141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:11.181455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51490","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:19:07 up  4:01,  0 user,  load average: 4.47, 2.78, 2.08
	Linux embed-certs-650951 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [723dfd75d8534883b0a233c3f02128ece81fcf6a78b873b00b6e865a1aaaf248] <==
	I1014 21:18:13.500049       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:18:13.500270       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1014 21:18:13.500417       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:18:13.500428       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:18:13.500442       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:18:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:18:13.683106       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:18:13.683900       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:18:13.684006       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:18:13.684244       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1014 21:18:43.683715       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1014 21:18:43.683840       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1014 21:18:43.683850       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1014 21:18:43.683953       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1014 21:18:45.184172       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1014 21:18:45.184216       1 metrics.go:72] Registering metrics
	I1014 21:18:45.184500       1 controller.go:711] "Syncing nftables rules"
	I1014 21:18:53.684418       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1014 21:18:53.684554       1 main.go:301] handling current node
	I1014 21:19:03.688377       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1014 21:19:03.688411       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9a14bf7659c1b6c8efd0475e0eaeb245bdc49f5e97446a64f00af1e088d658af] <==
	I1014 21:18:12.187617       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 21:18:12.205945       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 21:18:12.217840       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1014 21:18:12.241178       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1014 21:18:12.247503       1 cache.go:39] Caches are synced for autoregister controller
	I1014 21:18:12.260772       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1014 21:18:12.260806       1 policy_source.go:240] refreshing policies
	I1014 21:18:12.261067       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1014 21:18:12.289442       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 21:18:12.297179       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 21:18:12.297209       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 21:18:12.307478       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:18:12.327248       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1014 21:18:12.381915       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1014 21:18:12.760608       1 controller.go:667] quota admission added evaluator for: namespaces
	I1014 21:18:12.800687       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 21:18:12.815772       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1014 21:18:12.886886       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 21:18:12.899381       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 21:18:12.912748       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 21:18:13.094639       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.245.117"}
	I1014 21:18:13.126960       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.244.101"}
	I1014 21:18:15.439127       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 21:18:15.641586       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1014 21:18:15.847692       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [75ec260b360cee33d35761419902c0686412140608fc505b0a7fd6b06809b9be] <==
	I1014 21:18:15.268247       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1014 21:18:15.270042       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1014 21:18:15.271166       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1014 21:18:15.275463       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:18:15.277544       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1014 21:18:15.279734       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1014 21:18:15.282114       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1014 21:18:15.282242       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1014 21:18:15.282311       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1014 21:18:15.282473       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1014 21:18:15.282522       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1014 21:18:15.282546       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1014 21:18:15.283574       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1014 21:18:15.283658       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1014 21:18:15.285104       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1014 21:18:15.303861       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1014 21:18:15.303916       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1014 21:18:15.303936       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 21:18:15.303941       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1014 21:18:15.303946       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1014 21:18:15.307558       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1014 21:18:15.307656       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 21:18:15.307731       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-650951"
	I1014 21:18:15.307801       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1014 21:18:15.853063       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [bd1a815a5e7132e6fea924ac0892c08444dc8492532d62c704c55ddb2bee3069] <==
	I1014 21:18:13.561949       1 server_linux.go:53] "Using iptables proxy"
	I1014 21:18:13.665885       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 21:18:13.766574       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 21:18:13.766623       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1014 21:18:13.766798       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 21:18:13.790832       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:18:13.790891       1 server_linux.go:132] "Using iptables Proxier"
	I1014 21:18:13.794700       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 21:18:13.795012       1 server.go:527] "Version info" version="v1.34.1"
	I1014 21:18:13.795037       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:18:13.796242       1 config.go:200] "Starting service config controller"
	I1014 21:18:13.796446       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 21:18:13.799744       1 config.go:106] "Starting endpoint slice config controller"
	I1014 21:18:13.799818       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 21:18:13.799860       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 21:18:13.799889       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 21:18:13.800699       1 config.go:309] "Starting node config controller"
	I1014 21:18:13.800767       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 21:18:13.800796       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 21:18:13.904514       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 21:18:13.904609       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1014 21:18:13.904697       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [af87a8da5c17113367f1c826c6f15aabf7ed94ed0d2a08fe11374ca367dd615c] <==
	I1014 21:18:09.411153       1 serving.go:386] Generated self-signed cert in-memory
	W1014 21:18:12.038650       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1014 21:18:12.038685       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1014 21:18:12.038695       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 21:18:12.038702       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 21:18:12.227353       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 21:18:12.227387       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:18:12.233589       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:18:12.234951       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:18:12.236370       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 21:18:12.236454       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 21:18:12.339009       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 14 21:18:15 embed-certs-650951 kubelet[777]: I1014 21:18:15.927368     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d573b28f-976d-41c7-8f99-865f3358595f-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-shfk2\" (UID: \"d573b28f-976d-41c7-8f99-865f3358595f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-shfk2"
	Oct 14 21:18:16 embed-certs-650951 kubelet[777]: W1014 21:18:16.120076     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749/crio-26de3a92b7d295ab8895d71a771743b91702b3b5a84bff29208d53d53cb16c7a WatchSource:0}: Error finding container 26de3a92b7d295ab8895d71a771743b91702b3b5a84bff29208d53d53cb16c7a: Status 404 returned error can't find the container with id 26de3a92b7d295ab8895d71a771743b91702b3b5a84bff29208d53d53cb16c7a
	Oct 14 21:18:16 embed-certs-650951 kubelet[777]: W1014 21:18:16.137839     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749/crio-beec1b676f773f7f1684c728a4e9608905be8fa14642d4fe3b44274e589f97ff WatchSource:0}: Error finding container beec1b676f773f7f1684c728a4e9608905be8fa14642d4fe3b44274e589f97ff: Status 404 returned error can't find the container with id beec1b676f773f7f1684c728a4e9608905be8fa14642d4fe3b44274e589f97ff
	Oct 14 21:18:19 embed-certs-650951 kubelet[777]: I1014 21:18:19.927004     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 14 21:18:21 embed-certs-650951 kubelet[777]: I1014 21:18:21.099662     777 scope.go:117] "RemoveContainer" containerID="7c76f8d1d581943659cf7a8ab777e16740a06133d5d29e3d85cab16bf098624d"
	Oct 14 21:18:22 embed-certs-650951 kubelet[777]: I1014 21:18:22.105481     777 scope.go:117] "RemoveContainer" containerID="dc60db24d3e69f3ce154221b6bbe82ef8c6f2b94857c30de6caf3f807dc8418f"
	Oct 14 21:18:22 embed-certs-650951 kubelet[777]: E1014 21:18:22.105926     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-shfk2_kubernetes-dashboard(d573b28f-976d-41c7-8f99-865f3358595f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-shfk2" podUID="d573b28f-976d-41c7-8f99-865f3358595f"
	Oct 14 21:18:22 embed-certs-650951 kubelet[777]: I1014 21:18:22.107191     777 scope.go:117] "RemoveContainer" containerID="7c76f8d1d581943659cf7a8ab777e16740a06133d5d29e3d85cab16bf098624d"
	Oct 14 21:18:23 embed-certs-650951 kubelet[777]: I1014 21:18:23.122876     777 scope.go:117] "RemoveContainer" containerID="dc60db24d3e69f3ce154221b6bbe82ef8c6f2b94857c30de6caf3f807dc8418f"
	Oct 14 21:18:23 embed-certs-650951 kubelet[777]: E1014 21:18:23.123896     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-shfk2_kubernetes-dashboard(d573b28f-976d-41c7-8f99-865f3358595f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-shfk2" podUID="d573b28f-976d-41c7-8f99-865f3358595f"
	Oct 14 21:18:26 embed-certs-650951 kubelet[777]: I1014 21:18:26.085122     777 scope.go:117] "RemoveContainer" containerID="dc60db24d3e69f3ce154221b6bbe82ef8c6f2b94857c30de6caf3f807dc8418f"
	Oct 14 21:18:26 embed-certs-650951 kubelet[777]: E1014 21:18:26.085794     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-shfk2_kubernetes-dashboard(d573b28f-976d-41c7-8f99-865f3358595f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-shfk2" podUID="d573b28f-976d-41c7-8f99-865f3358595f"
	Oct 14 21:18:40 embed-certs-650951 kubelet[777]: I1014 21:18:40.971009     777 scope.go:117] "RemoveContainer" containerID="dc60db24d3e69f3ce154221b6bbe82ef8c6f2b94857c30de6caf3f807dc8418f"
	Oct 14 21:18:41 embed-certs-650951 kubelet[777]: I1014 21:18:41.169142     777 scope.go:117] "RemoveContainer" containerID="dc60db24d3e69f3ce154221b6bbe82ef8c6f2b94857c30de6caf3f807dc8418f"
	Oct 14 21:18:41 embed-certs-650951 kubelet[777]: I1014 21:18:41.169350     777 scope.go:117] "RemoveContainer" containerID="981824df308207c62b5bdaac3e3ae1cc93631eae4528ed56bd8441f91008ee58"
	Oct 14 21:18:41 embed-certs-650951 kubelet[777]: E1014 21:18:41.169543     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-shfk2_kubernetes-dashboard(d573b28f-976d-41c7-8f99-865f3358595f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-shfk2" podUID="d573b28f-976d-41c7-8f99-865f3358595f"
	Oct 14 21:18:41 embed-certs-650951 kubelet[777]: I1014 21:18:41.193453     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7m6nl" podStartSLOduration=16.593005132000002 podStartE2EDuration="26.19163666s" podCreationTimestamp="2025-10-14 21:18:15 +0000 UTC" firstStartedPulling="2025-10-14 21:18:16.141599324 +0000 UTC m=+10.499880430" lastFinishedPulling="2025-10-14 21:18:25.740230834 +0000 UTC m=+20.098511958" observedRunningTime="2025-10-14 21:18:26.151637756 +0000 UTC m=+20.509918879" watchObservedRunningTime="2025-10-14 21:18:41.19163666 +0000 UTC m=+35.549917775"
	Oct 14 21:18:44 embed-certs-650951 kubelet[777]: I1014 21:18:44.182238     777 scope.go:117] "RemoveContainer" containerID="1ec4896f4fa53d8712ff6b3a2795cdf529deeb135de65cde7135870485394ab7"
	Oct 14 21:18:46 embed-certs-650951 kubelet[777]: I1014 21:18:46.085032     777 scope.go:117] "RemoveContainer" containerID="981824df308207c62b5bdaac3e3ae1cc93631eae4528ed56bd8441f91008ee58"
	Oct 14 21:18:46 embed-certs-650951 kubelet[777]: E1014 21:18:46.085218     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-shfk2_kubernetes-dashboard(d573b28f-976d-41c7-8f99-865f3358595f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-shfk2" podUID="d573b28f-976d-41c7-8f99-865f3358595f"
	Oct 14 21:19:00 embed-certs-650951 kubelet[777]: I1014 21:19:00.970240     777 scope.go:117] "RemoveContainer" containerID="981824df308207c62b5bdaac3e3ae1cc93631eae4528ed56bd8441f91008ee58"
	Oct 14 21:19:00 embed-certs-650951 kubelet[777]: E1014 21:19:00.970407     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-shfk2_kubernetes-dashboard(d573b28f-976d-41c7-8f99-865f3358595f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-shfk2" podUID="d573b28f-976d-41c7-8f99-865f3358595f"
	Oct 14 21:19:04 embed-certs-650951 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 14 21:19:04 embed-certs-650951 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 14 21:19:04 embed-certs-650951 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [f8e20f3e5a5c85378adc47d0f5dc7dae779c420c3c2c99ef013b4e3e8a804df1] <==
	2025/10/14 21:18:25 Using namespace: kubernetes-dashboard
	2025/10/14 21:18:25 Using in-cluster config to connect to apiserver
	2025/10/14 21:18:25 Using secret token for csrf signing
	2025/10/14 21:18:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/14 21:18:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/14 21:18:25 Successful initial request to the apiserver, version: v1.34.1
	2025/10/14 21:18:25 Generating JWE encryption key
	2025/10/14 21:18:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/14 21:18:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/14 21:18:26 Initializing JWE encryption key from synchronized object
	2025/10/14 21:18:26 Creating in-cluster Sidecar client
	2025/10/14 21:18:26 Serving insecurely on HTTP port: 9090
	2025/10/14 21:18:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/14 21:18:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/14 21:18:25 Starting overwatch
	
	
	==> storage-provisioner [1ec4896f4fa53d8712ff6b3a2795cdf529deeb135de65cde7135870485394ab7] <==
	I1014 21:18:13.547176       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1014 21:18:43.549705       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [640b830bec54943fa0a57e6d768ea1a14473244182a0769946d80d3ae3f48f13] <==
	I1014 21:18:44.264444       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 21:18:44.276905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 21:18:44.276966       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1014 21:18:44.279202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:47.743458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:52.005690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:55.604287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:58.658145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:19:01.680270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:19:01.685322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 21:19:01.685560       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 21:19:01.685757       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-650951_dda90deb-4706-46fd-9f4f-26ece641b0db!
	I1014 21:19:01.686768       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e765fc2b-374f-4cc2-922a-737fb3491cd4", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-650951_dda90deb-4706-46fd-9f4f-26ece641b0db became leader
	W1014 21:19:01.692488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:19:01.701680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 21:19:01.786223       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-650951_dda90deb-4706-46fd-9f4f-26ece641b0db!
	W1014 21:19:03.706188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:19:03.716781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:19:05.726063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:19:05.733179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:19:07.737745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:19:07.745950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
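Note: in the storage-provisioner logs above, the first instance (1ec4896f4fa5) dies with "dial tcp 10.96.0.1:443: i/o timeout" because the in-cluster API server VIP was unreachable at that point; the replacement instance (640b830bec54) started at 21:18:44 then acquires the kube-system/k8s.io-minikube-hostpath lease at 21:19:01. A minimal way to re-check that VIP from inside the node, assuming the profile is still running (illustrative command, not part of the test suite):

	out/minikube-linux-arm64 -p embed-certs-650951 ssh -- curl -sk https://10.96.0.1:443/version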
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-650951 -n embed-certs-650951
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-650951 -n embed-certs-650951: exit status 2 (539.334545ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-650951 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
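Note: the field selector above lists every pod whose status.phase is not Running (i.e. Pending, Succeeded, Failed, or Unknown). The same check can be run by hand with the default table output instead of the jsonpath template (illustrative, using the context name from the command above):

	kubectl --context embed-certs-650951 get pods -A --field-selector=status.phase!=Running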
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-650951
helpers_test.go:243: (dbg) docker inspect embed-certs-650951:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749",
	        "Created": "2025-10-14T21:16:16.666518706Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 742629,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:17:57.875841001Z",
	            "FinishedAt": "2025-10-14T21:17:56.712241407Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749/hostname",
	        "HostsPath": "/var/lib/docker/containers/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749/hosts",
	        "LogPath": "/var/lib/docker/containers/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749-json.log",
	        "Name": "/embed-certs-650951",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-650951:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-650951",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749",
	                "LowerDir": "/var/lib/docker/overlay2/11159d7ae5bf1eee965b74cca31aeff653ffd76d7ceea8b33383d89124ef69e1-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11159d7ae5bf1eee965b74cca31aeff653ffd76d7ceea8b33383d89124ef69e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11159d7ae5bf1eee965b74cca31aeff653ffd76d7ceea8b33383d89124ef69e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11159d7ae5bf1eee965b74cca31aeff653ffd76d7ceea8b33383d89124ef69e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-650951",
	                "Source": "/var/lib/docker/volumes/embed-certs-650951/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-650951",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-650951",
	                "name.minikube.sigs.k8s.io": "embed-certs-650951",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a5bf0521513d5ee994fe65388b5e6808e66668310b89f59eef362bcab587425",
	            "SandboxKey": "/var/run/docker/netns/2a5bf0521513",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33807"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33808"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33811"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33809"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33810"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-650951": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:e4:e6:c4:31:51",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dec8f1ebdf6a87cb6a25803785fed11070bcba6788531da683aec0eefde26c55",
	                    "EndpointID": "11d688700d5117d10782913dd7f50c2d96989d3656cde07893f98bbfd91b1e47",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-650951",
	                        "b2e56cd2dd30"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
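Note: in the NetworkSettings.Ports block of the inspect output above, each exposed container port is published on an ephemeral loopback port; 8443/tcp (the Kubernetes API server) maps to 127.0.0.1:33810. A single mapping can be read back with a Go template instead of parsing the whole JSON, using the same index expression the provisioning log later uses for 22/tcp (illustrative command against the container shown above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-650951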
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-650951 -n embed-certs-650951
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-650951 -n embed-certs-650951: exit status 2 (479.002175ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-650951 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-650951 logs -n 25: (1.516973506s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:14 UTC │ 14 Oct 25 21:15 UTC │
	│ start   │ -p cert-expiration-341199 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ delete  │ -p cert-expiration-341199                                                                                                                                                                                                                     │ cert-expiration-341199       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ start   │ -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:17 UTC │
	│ image   │ old-k8s-version-624341 image list --format=json                                                                                                                                                                                               │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ pause   │ -p old-k8s-version-624341 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │                     │
	│ delete  │ -p old-k8s-version-624341                                                                                                                                                                                                                     │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:16 UTC │
	│ delete  │ -p old-k8s-version-624341                                                                                                                                                                                                                     │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:16 UTC │
	│ start   │ -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-319221 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-319221 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-319221 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ start   │ -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-650951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	│ stop    │ -p embed-certs-650951 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-650951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ start   │ -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:18 UTC │
	│ image   │ default-k8s-diff-port-319221 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ pause   │ -p default-k8s-diff-port-319221 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-319221                                                                                                                                                                                                               │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ delete  │ -p default-k8s-diff-port-319221                                                                                                                                                                                                               │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ delete  │ -p disable-driver-mounts-457849                                                                                                                                                                                                               │ disable-driver-mounts-457849 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ start   │ -p no-preload-970287 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │                     │
	│ image   │ embed-certs-650951 image list --format=json                                                                                                                                                                                                   │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ pause   │ -p embed-certs-650951 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:18:53
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:18:53.885943  746423 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:18:53.886116  746423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:18:53.886126  746423 out.go:374] Setting ErrFile to fd 2...
	I1014 21:18:53.886131  746423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:18:53.886491  746423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:18:53.887020  746423 out.go:368] Setting JSON to false
	I1014 21:18:53.888022  746423 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14484,"bootTime":1760462250,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:18:53.888122  746423 start.go:141] virtualization:  
	I1014 21:18:53.892083  746423 out.go:179] * [no-preload-970287] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:18:53.896149  746423 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:18:53.896271  746423 notify.go:220] Checking for updates...
	I1014 21:18:53.902310  746423 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:18:53.905265  746423 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:18:53.908172  746423 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:18:53.911146  746423 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:18:53.914136  746423 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:18:53.917471  746423 config.go:182] Loaded profile config "embed-certs-650951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:18:53.917596  746423 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:18:53.952602  746423 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:18:53.952745  746423 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:18:54.021618  746423 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:18:54.010863537 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:18:54.021731  746423 docker.go:318] overlay module found
	I1014 21:18:54.024997  746423 out.go:179] * Using the docker driver based on user configuration
	I1014 21:18:54.027846  746423 start.go:305] selected driver: docker
	I1014 21:18:54.027874  746423 start.go:925] validating driver "docker" against <nil>
	I1014 21:18:54.027890  746423 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:18:54.028833  746423 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:18:54.090242  746423 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:18:54.079264405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:18:54.090408  746423 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 21:18:54.090652  746423 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:18:54.093743  746423 out.go:179] * Using Docker driver with root privileges
	I1014 21:18:54.096607  746423 cni.go:84] Creating CNI manager for ""
	I1014 21:18:54.096688  746423 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:18:54.096701  746423 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 21:18:54.096783  746423 start.go:349] cluster config:
	{Name:no-preload-970287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-970287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:18:54.099997  746423 out.go:179] * Starting "no-preload-970287" primary control-plane node in "no-preload-970287" cluster
	I1014 21:18:54.103029  746423 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:18:54.105902  746423 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:18:54.109111  746423 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:18:54.109263  746423 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/config.json ...
	I1014 21:18:54.109300  746423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/config.json: {Name:mk94c562bb839278790e95c3cb5b92738df36261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:18:54.109584  746423 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:18:54.109991  746423 cache.go:107] acquiring lock: {Name:mk867258823b28c0a6577fb68ee404726b134331 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:18:54.110067  746423 cache.go:115] /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1014 21:18:54.110075  746423 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 103.6µs
	I1014 21:18:54.110096  746423 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1014 21:18:54.110109  746423 cache.go:107] acquiring lock: {Name:mk02685c74746afe884538e00ccba24a205a7a97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:18:54.110216  746423 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1014 21:18:54.110447  746423 cache.go:107] acquiring lock: {Name:mkfd85b5cd748a277287b0e4a5c32afde0323ae4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:18:54.110561  746423 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1014 21:18:54.110692  746423 cache.go:107] acquiring lock: {Name:mk61dc7cd6219ec2f627a7ba4a170447679fd40f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:18:54.110795  746423 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1014 21:18:54.110909  746423 cache.go:107] acquiring lock: {Name:mk06b01febd0a216a260e0d94f0882d94b3570e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:18:54.111003  746423 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1014 21:18:54.111130  746423 cache.go:107] acquiring lock: {Name:mkd93b76296abb1739672d82097889947058106c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:18:54.111228  746423 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1014 21:18:54.111377  746423 cache.go:107] acquiring lock: {Name:mk11554cc008e9d39e9e0a5de0ae2783f48765f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:18:54.111463  746423 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1014 21:18:54.111584  746423 cache.go:107] acquiring lock: {Name:mk431d2046da2d67f2357e06c8f8bd43155a4f68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:18:54.111697  746423 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1014 21:18:54.120529  746423 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1014 21:18:54.121122  746423 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1014 21:18:54.121278  746423 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1014 21:18:54.121896  746423 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1014 21:18:54.121918  746423 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1014 21:18:54.122608  746423 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1014 21:18:54.123044  746423 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1014 21:18:54.139038  746423 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:18:54.139071  746423 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:18:54.139088  746423 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:18:54.139115  746423 start.go:360] acquireMachinesLock for no-preload-970287: {Name:mk73033dcc861b0fd6c8055acaffbf9362e5f942 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:18:54.139216  746423 start.go:364] duration metric: took 85.13µs to acquireMachinesLock for "no-preload-970287"
	I1014 21:18:54.139242  746423 start.go:93] Provisioning new machine with config: &{Name:no-preload-970287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-970287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:18:54.139301  746423 start.go:125] createHost starting for "" (driver="docker")
	I1014 21:18:54.142966  746423 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 21:18:54.143216  746423 start.go:159] libmachine.API.Create for "no-preload-970287" (driver="docker")
	I1014 21:18:54.143261  746423 client.go:168] LocalClient.Create starting
	I1014 21:18:54.143350  746423 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem
	I1014 21:18:54.143386  746423 main.go:141] libmachine: Decoding PEM data...
	I1014 21:18:54.143404  746423 main.go:141] libmachine: Parsing certificate...
	I1014 21:18:54.143460  746423 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem
	I1014 21:18:54.143482  746423 main.go:141] libmachine: Decoding PEM data...
	I1014 21:18:54.143491  746423 main.go:141] libmachine: Parsing certificate...
	I1014 21:18:54.143871  746423 cli_runner.go:164] Run: docker network inspect no-preload-970287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 21:18:54.168678  746423 cli_runner.go:211] docker network inspect no-preload-970287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 21:18:54.168763  746423 network_create.go:284] running [docker network inspect no-preload-970287] to gather additional debugging logs...
	I1014 21:18:54.168785  746423 cli_runner.go:164] Run: docker network inspect no-preload-970287
	W1014 21:18:54.185805  746423 cli_runner.go:211] docker network inspect no-preload-970287 returned with exit code 1
	I1014 21:18:54.185837  746423 network_create.go:287] error running [docker network inspect no-preload-970287]: docker network inspect no-preload-970287: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-970287 not found
	I1014 21:18:54.185864  746423 network_create.go:289] output of [docker network inspect no-preload-970287]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-970287 not found
	
	** /stderr **
	I1014 21:18:54.185963  746423 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:18:54.214171  746423 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3483fd8ccce IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:16:f0:2b:be:5b} reservation:<nil>}
	I1014 21:18:54.214511  746423 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-92eecc9d5dcd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:46:39:04:de:cc:9d} reservation:<nil>}
	I1014 21:18:54.214871  746423 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6db3c426d67c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:2a:f4:c0:70:fe} reservation:<nil>}
	I1014 21:18:54.215368  746423 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ca53a0}
	I1014 21:18:54.215403  746423 network_create.go:124] attempt to create docker network no-preload-970287 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1014 21:18:54.215507  746423 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-970287 no-preload-970287
	I1014 21:18:54.293184  746423 network_create.go:108] docker network no-preload-970287 192.168.76.0/24 created
	I1014 21:18:54.293212  746423 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-970287" container
	I1014 21:18:54.293304  746423 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 21:18:54.310533  746423 cli_runner.go:164] Run: docker volume create no-preload-970287 --label name.minikube.sigs.k8s.io=no-preload-970287 --label created_by.minikube.sigs.k8s.io=true
	I1014 21:18:54.331185  746423 oci.go:103] Successfully created a docker volume no-preload-970287
	I1014 21:18:54.331299  746423 cli_runner.go:164] Run: docker run --rm --name no-preload-970287-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-970287 --entrypoint /usr/bin/test -v no-preload-970287:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 21:18:54.461429  746423 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1014 21:18:54.481316  746423 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1014 21:18:54.481836  746423 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1014 21:18:54.483701  746423 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1014 21:18:54.485388  746423 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1014 21:18:54.492475  746423 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1014 21:18:54.494375  746423 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1014 21:18:54.556875  746423 cache.go:157] /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1014 21:18:54.556946  746423 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 445.837995ms
	I1014 21:18:54.556976  746423 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1014 21:18:55.005852  746423 cache.go:157] /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1014 21:18:55.005884  746423 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 894.975852ms
	I1014 21:18:55.005909  746423 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1014 21:18:55.057028  746423 oci.go:107] Successfully prepared a docker volume no-preload-970287
	I1014 21:18:55.057070  746423 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1014 21:18:55.057218  746423 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1014 21:18:55.057382  746423 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 21:18:55.126561  746423 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-970287 --name no-preload-970287 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-970287 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-970287 --network no-preload-970287 --ip 192.168.76.2 --volume no-preload-970287:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 21:18:55.410188  746423 cache.go:157] /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1014 21:18:55.410222  746423 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.299531785s
	I1014 21:18:55.410239  746423 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1014 21:18:55.502459  746423 cache.go:157] /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1014 21:18:55.502529  746423 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.39094912s
	I1014 21:18:55.502556  746423 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1014 21:18:55.517095  746423 cache.go:157] /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1014 21:18:55.517153  746423 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.407042773s
	I1014 21:18:55.517166  746423 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1014 21:18:55.532074  746423 cache.go:157] /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1014 21:18:55.532102  746423 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.421658234s
	I1014 21:18:55.532114  746423 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1014 21:18:55.623868  746423 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Running}}
	I1014 21:18:55.647427  746423 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:18:55.681004  746423 cli_runner.go:164] Run: docker exec no-preload-970287 stat /var/lib/dpkg/alternatives/iptables
	I1014 21:18:55.755589  746423 oci.go:144] the created container "no-preload-970287" has a running status.
	I1014 21:18:55.755637  746423 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa...
	I1014 21:18:56.822139  746423 cache.go:157] /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1014 21:18:56.822166  746423 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.710791483s
	I1014 21:18:56.822177  746423 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1014 21:18:56.822189  746423 cache.go:87] Successfully saved all images to host disk.
	I1014 21:18:57.315629  746423 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 21:18:57.337505  746423 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:18:57.355191  746423 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 21:18:57.355215  746423 kic_runner.go:114] Args: [docker exec --privileged no-preload-970287 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 21:18:57.396763  746423 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:18:57.414943  746423 machine.go:93] provisionDockerMachine start ...
	I1014 21:18:57.415042  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:18:57.437624  746423 main.go:141] libmachine: Using SSH client type: native
	I1014 21:18:57.437975  746423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1014 21:18:57.437990  746423 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:18:57.584052  746423 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-970287
	
	I1014 21:18:57.584075  746423 ubuntu.go:182] provisioning hostname "no-preload-970287"
	I1014 21:18:57.584136  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:18:57.603133  746423 main.go:141] libmachine: Using SSH client type: native
	I1014 21:18:57.603442  746423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1014 21:18:57.603453  746423 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-970287 && echo "no-preload-970287" | sudo tee /etc/hostname
	I1014 21:18:57.759403  746423 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-970287
	
	I1014 21:18:57.759534  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:18:57.777495  746423 main.go:141] libmachine: Using SSH client type: native
	I1014 21:18:57.777800  746423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1014 21:18:57.777821  746423 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-970287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-970287/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-970287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:18:57.920601  746423 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 21:18:57.920629  746423 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:18:57.920677  746423 ubuntu.go:190] setting up certificates
	I1014 21:18:57.920688  746423 provision.go:84] configureAuth start
	I1014 21:18:57.920763  746423 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970287
	I1014 21:18:57.938434  746423 provision.go:143] copyHostCerts
	I1014 21:18:57.938510  746423 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:18:57.938525  746423 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:18:57.938608  746423 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:18:57.938718  746423 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:18:57.938732  746423 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:18:57.938761  746423 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:18:57.938842  746423 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:18:57.938852  746423 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:18:57.938877  746423 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:18:57.938929  746423 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.no-preload-970287 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-970287]
	I1014 21:18:58.224164  746423 provision.go:177] copyRemoteCerts
	I1014 21:18:58.224234  746423 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:18:58.224285  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:18:58.242237  746423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:18:58.344024  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:18:58.365704  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 21:18:58.383225  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 21:18:58.400836  746423 provision.go:87] duration metric: took 480.130009ms to configureAuth
	I1014 21:18:58.400864  746423 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:18:58.401048  746423 config.go:182] Loaded profile config "no-preload-970287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:18:58.401166  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:18:58.418465  746423 main.go:141] libmachine: Using SSH client type: native
	I1014 21:18:58.418792  746423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1014 21:18:58.418813  746423 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:18:58.755328  746423 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:18:58.755397  746423 machine.go:96] duration metric: took 1.34042981s to provisionDockerMachine
	I1014 21:18:58.755421  746423 client.go:171] duration metric: took 4.612152806s to LocalClient.Create
	I1014 21:18:58.755457  746423 start.go:167] duration metric: took 4.612246765s to libmachine.API.Create "no-preload-970287"
	I1014 21:18:58.755480  746423 start.go:293] postStartSetup for "no-preload-970287" (driver="docker")
	I1014 21:18:58.755511  746423 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:18:58.755597  746423 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:18:58.755674  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:18:58.772812  746423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:18:58.876836  746423 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:18:58.880257  746423 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:18:58.880284  746423 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:18:58.880295  746423 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:18:58.880376  746423 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:18:58.880466  746423 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:18:58.880580  746423 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:18:58.888577  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:18:58.908626  746423 start.go:296] duration metric: took 153.112575ms for postStartSetup
	I1014 21:18:58.908986  746423 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970287
	I1014 21:18:58.925607  746423 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/config.json ...
	I1014 21:18:58.925893  746423 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:18:58.925941  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:18:58.942834  746423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:18:59.045763  746423 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:18:59.050588  746423 start.go:128] duration metric: took 4.911270597s to createHost
	I1014 21:18:59.050615  746423 start.go:83] releasing machines lock for "no-preload-970287", held for 4.91138936s
	I1014 21:18:59.050686  746423 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970287
	I1014 21:18:59.068563  746423 ssh_runner.go:195] Run: cat /version.json
	I1014 21:18:59.068631  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:18:59.068878  746423 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:18:59.068954  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:18:59.088454  746423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:18:59.102129  746423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:18:59.187922  746423 ssh_runner.go:195] Run: systemctl --version
	I1014 21:18:59.310340  746423 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:18:59.351789  746423 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:18:59.356262  746423 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:18:59.356359  746423 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:18:59.386651  746423 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1014 21:18:59.386673  746423 start.go:495] detecting cgroup driver to use...
	I1014 21:18:59.386705  746423 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:18:59.386768  746423 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:18:59.404092  746423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:18:59.416693  746423 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:18:59.416752  746423 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:18:59.434373  746423 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:18:59.453817  746423 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:18:59.582792  746423 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:18:59.704343  746423 docker.go:234] disabling docker service ...
	I1014 21:18:59.704410  746423 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:18:59.726268  746423 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:18:59.739297  746423 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:18:59.853316  746423 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:18:59.985135  746423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:18:59.998106  746423 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:19:00.072842  746423 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:19:00.072990  746423 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:00.107718  746423 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:19:00.107850  746423 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:00.140946  746423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:00.189159  746423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:00.235791  746423 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:19:00.257249  746423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:00.281928  746423 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:00.301541  746423 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:00.342304  746423 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:19:00.354925  746423 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:19:00.371807  746423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:19:00.515628  746423 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 21:19:00.642741  746423 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:19:00.642852  746423 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:19:00.646755  746423 start.go:563] Will wait 60s for crictl version
	I1014 21:19:00.646858  746423 ssh_runner.go:195] Run: which crictl
	I1014 21:19:00.650329  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:19:00.682991  746423 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:19:00.683118  746423 ssh_runner.go:195] Run: crio --version
	I1014 21:19:00.712211  746423 ssh_runner.go:195] Run: crio --version
	I1014 21:19:00.744661  746423 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 21:19:00.747514  746423 cli_runner.go:164] Run: docker network inspect no-preload-970287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:19:00.768655  746423 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1014 21:19:00.772562  746423 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:19:00.782216  746423 kubeadm.go:883] updating cluster {Name:no-preload-970287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-970287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:19:00.782321  746423 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:19:00.782365  746423 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:19:00.806370  746423 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1014 21:19:00.806396  746423 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 21:19:00.806457  746423 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:19:00.806667  746423 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1014 21:19:00.806782  746423 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1014 21:19:00.806876  746423 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1014 21:19:00.806973  746423 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1014 21:19:00.807074  746423 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1014 21:19:00.807178  746423 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1014 21:19:00.807273  746423 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1014 21:19:00.809023  746423 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1014 21:19:00.809074  746423 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1014 21:19:00.809132  746423 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1014 21:19:00.809023  746423 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1014 21:19:00.809250  746423 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1014 21:19:00.809294  746423 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1014 21:19:00.809336  746423 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1014 21:19:00.809382  746423 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:19:01.042719  746423 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1014 21:19:01.063532  746423 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1014 21:19:01.087387  746423 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1014 21:19:01.088081  746423 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1014 21:19:01.088137  746423 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1014 21:19:01.088199  746423 ssh_runner.go:195] Run: which crictl
	I1014 21:19:01.088782  746423 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1014 21:19:01.102238  746423 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1014 21:19:01.102328  746423 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1014 21:19:01.102579  746423 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1014 21:19:01.143929  746423 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1014 21:19:01.144032  746423 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1014 21:19:01.144133  746423 ssh_runner.go:195] Run: which crictl
	I1014 21:19:01.216563  746423 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1014 21:19:01.216650  746423 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1014 21:19:01.216715  746423 ssh_runner.go:195] Run: which crictl
	I1014 21:19:01.223351  746423 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1014 21:19:01.223532  746423 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1014 21:19:01.223589  746423 ssh_runner.go:195] Run: which crictl
	I1014 21:19:01.223503  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1014 21:19:01.254557  746423 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1014 21:19:01.254641  746423 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1014 21:19:01.254711  746423 ssh_runner.go:195] Run: which crictl
	I1014 21:19:01.283446  746423 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1014 21:19:01.283549  746423 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1014 21:19:01.283610  746423 ssh_runner.go:195] Run: which crictl
	I1014 21:19:01.288890  746423 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1014 21:19:01.288944  746423 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1014 21:19:01.289042  746423 ssh_runner.go:195] Run: which crictl
	I1014 21:19:01.289190  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1014 21:19:01.289283  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1014 21:19:01.289415  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1014 21:19:01.289443  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1014 21:19:01.289521  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1014 21:19:01.290415  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1014 21:19:01.387437  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1014 21:19:01.387549  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1014 21:19:01.400867  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1014 21:19:01.401012  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1014 21:19:01.401108  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1014 21:19:01.401196  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1014 21:19:01.401314  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1014 21:19:01.473889  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1014 21:19:01.474014  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1014 21:19:01.537893  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1014 21:19:01.537994  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1014 21:19:01.538027  746423 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1014 21:19:01.538324  746423 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1014 21:19:01.538076  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1014 21:19:01.538206  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1014 21:19:01.543605  746423 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1014 21:19:01.543757  746423 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1014 21:19:01.543866  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1014 21:19:01.637578  746423 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1014 21:19:01.637682  746423 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1014 21:19:01.637758  746423 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1014 21:19:01.637804  746423 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1014 21:19:01.637857  746423 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1014 21:19:01.637902  746423 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1014 21:19:01.637956  746423 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1014 21:19:01.637969  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1014 21:19:01.638025  746423 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1014 21:19:01.638075  746423 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1014 21:19:01.638145  746423 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1014 21:19:01.638194  746423 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1014 21:19:01.638262  746423 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1014 21:19:01.638277  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1014 21:19:01.702143  746423 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1014 21:19:01.702226  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1014 21:19:01.702317  746423 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1014 21:19:01.702350  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1014 21:19:01.702421  746423 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1014 21:19:01.702452  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1014 21:19:01.702537  746423 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1014 21:19:01.702567  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1014 21:19:01.702646  746423 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1014 21:19:01.702675  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1014 21:19:01.865676  746423 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1014 21:19:01.865773  746423 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1014 21:19:02.133004  746423 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1014 21:19:02.133210  746423 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:19:02.312130  746423 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1014 21:19:02.312162  746423 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1014 21:19:02.312303  746423 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1014 21:19:02.341206  746423 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1014 21:19:02.341252  746423 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:19:02.341402  746423 ssh_runner.go:195] Run: which crictl
	I1014 21:19:04.401313  746423 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.088968851s)
	I1014 21:19:04.401344  746423 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1014 21:19:04.401352  746423 ssh_runner.go:235] Completed: which crictl: (2.059930923s)
	I1014 21:19:04.401362  746423 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1014 21:19:04.401411  746423 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1014 21:19:04.401411  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:19:05.862780  746423 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.46129043s)
	I1014 21:19:05.862855  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:19:05.862917  746423 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.461494201s)
	I1014 21:19:05.862930  746423 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1014 21:19:05.862946  746423 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1014 21:19:05.862980  746423 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1014 21:19:05.948794  746423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:19:08.178260  746423 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.229431661s)
	I1014 21:19:08.178308  746423 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1014 21:19:08.178397  746423 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1014 21:19:08.178786  746423 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.315790359s)
	I1014 21:19:08.178808  746423 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1014 21:19:08.178826  746423 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1014 21:19:08.178874  746423 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1014 21:19:08.186627  746423 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1014 21:19:08.186663  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	
	
	==> CRI-O <==
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.183987427Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fe9ec24a-1e4e-41f3-9394-489332967f4f name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.184930242Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a06c3b40-4e2c-44df-8751-bf9fc94afa99 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.185157414Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.204856201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.205075874Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/cc2c3cfcd010e507d4448b2c09e2bae5594daaa69b03f658d9993697f67458c4/merged/etc/passwd: no such file or directory"
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.205106882Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cc2c3cfcd010e507d4448b2c09e2bae5594daaa69b03f658d9993697f67458c4/merged/etc/group: no such file or directory"
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.205432058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.238456843Z" level=info msg="Created container 640b830bec54943fa0a57e6d768ea1a14473244182a0769946d80d3ae3f48f13: kube-system/storage-provisioner/storage-provisioner" id=a06c3b40-4e2c-44df-8751-bf9fc94afa99 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.24068085Z" level=info msg="Starting container: 640b830bec54943fa0a57e6d768ea1a14473244182a0769946d80d3ae3f48f13" id=9fe3f59b-d7e9-449d-be89-d88fb2a0f224 name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:18:44 embed-certs-650951 crio[651]: time="2025-10-14T21:18:44.243810575Z" level=info msg="Started container" PID=1647 containerID=640b830bec54943fa0a57e6d768ea1a14473244182a0769946d80d3ae3f48f13 description=kube-system/storage-provisioner/storage-provisioner id=9fe3f59b-d7e9-449d-be89-d88fb2a0f224 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aabfd233af99d5d40db7b81400259b2b3235088466ab85949bf3965c7d396d73
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.685548196Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.690622733Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.690662316Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.690682189Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.694770178Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.694961222Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.695044194Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.699822151Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.699973334Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.700044736Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.710243027Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.710427343Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.710548674Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.713782697Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:18:53 embed-certs-650951 crio[651]: time="2025-10-14T21:18:53.713933723Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	640b830bec549       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   aabfd233af99d       storage-provisioner                          kube-system
	981824df30820       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago       Exited              dashboard-metrics-scraper   2                   26de3a92b7d29       dashboard-metrics-scraper-6ffb444bf9-shfk2   kubernetes-dashboard
	f8e20f3e5a5c8       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   beec1b676f773       kubernetes-dashboard-855c9754f9-7m6nl        kubernetes-dashboard
	6353f34d398a5       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   4edf27967113a       busybox                                      default
	9bd0bfe570cf6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           56 seconds ago       Running             coredns                     1                   8ee6cc8890364       coredns-66bc5c9577-sllzf                     kube-system
	723dfd75d8534       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   101e4a51f7192       kindnet-x2jcr                                kube-system
	bd1a815a5e713       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           56 seconds ago       Running             kube-proxy                  1                   2ce7335ca1136       kube-proxy-2xsjs                             kube-system
	1ec4896f4fa53       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   aabfd233af99d       storage-provisioner                          kube-system
	75ec260b360ce       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   bf9b0ada461dd       kube-controller-manager-embed-certs-650951   kube-system
	26600dc7ca36c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   d5d16aaf738ee       etcd-embed-certs-650951                      kube-system
	9a14bf7659c1b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   c3f0ff7ac40fd       kube-apiserver-embed-certs-650951            kube-system
	af87a8da5c171       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   5052fc90a0dc6       kube-scheduler-embed-certs-650951            kube-system
	
	
	==> coredns [9bd0bfe570cf69a832134e6d457e4570261271dfe99b30a8105ef7eff7981b88] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39536 - 62945 "HINFO IN 2808594643827052679.7558689821458387517. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024777895s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-650951
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-650951
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=embed-certs-650951
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T21_16_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 21:16:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-650951
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 21:19:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 21:18:43 +0000   Tue, 14 Oct 2025 21:16:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 21:18:43 +0000   Tue, 14 Oct 2025 21:16:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 21:18:43 +0000   Tue, 14 Oct 2025 21:16:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 21:18:43 +0000   Tue, 14 Oct 2025 21:17:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-650951
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 fcbe15b75c40400db85e1834bebb59e4
	  System UUID:                27698974-6461-4c41-a998-671addfa4ff7
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-sllzf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m21s
	  kube-system                 etcd-embed-certs-650951                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m26s
	  kube-system                 kindnet-x2jcr                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-embed-certs-650951             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-controller-manager-embed-certs-650951    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-2xsjs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-embed-certs-650951             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-shfk2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7m6nl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m19s                  kube-proxy       
	  Normal   Starting                 56s                    kube-proxy       
	  Normal   Starting                 2m34s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m34s (x8 over 2m34s)  kubelet          Node embed-certs-650951 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m34s (x8 over 2m34s)  kubelet          Node embed-certs-650951 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m34s (x8 over 2m34s)  kubelet          Node embed-certs-650951 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m26s                  kubelet          Node embed-certs-650951 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m26s                  kubelet          Node embed-certs-650951 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m26s                  kubelet          Node embed-certs-650951 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m26s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m22s                  node-controller  Node embed-certs-650951 event: Registered Node embed-certs-650951 in Controller
	  Normal   NodeReady                100s                   kubelet          Node embed-certs-650951 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 65s)      kubelet          Node embed-certs-650951 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 65s)      kubelet          Node embed-certs-650951 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 65s)      kubelet          Node embed-certs-650951 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                    node-controller  Node embed-certs-650951 event: Registered Node embed-certs-650951 in Controller
	
	
	==> dmesg <==
	[Oct14 20:47] overlayfs: idmapped layers are currently not supported
	[Oct14 20:48] overlayfs: idmapped layers are currently not supported
	[Oct14 20:49] overlayfs: idmapped layers are currently not supported
	[ +15.861847] overlayfs: idmapped layers are currently not supported
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	[Oct14 21:03] overlayfs: idmapped layers are currently not supported
	[Oct14 21:04] overlayfs: idmapped layers are currently not supported
	[Oct14 21:12] overlayfs: idmapped layers are currently not supported
	[Oct14 21:13] overlayfs: idmapped layers are currently not supported
	[ +32.288963] overlayfs: idmapped layers are currently not supported
	[Oct14 21:15] overlayfs: idmapped layers are currently not supported
	[Oct14 21:16] overlayfs: idmapped layers are currently not supported
	[ +21.775876] overlayfs: idmapped layers are currently not supported
	[Oct14 21:17] overlayfs: idmapped layers are currently not supported
	[Oct14 21:18] overlayfs: idmapped layers are currently not supported
	[Oct14 21:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [26600dc7ca36cf22f70e429a378f851aa6e2859e07559b7093ae575df3166637] <==
	{"level":"warn","ts":"2025-10-14T21:18:10.585347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.607017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.630444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.644875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.704904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.717906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.727617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.751408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.766921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.786466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.822988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.837855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.861011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.883908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.934882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.975047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:10.988946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:11.008621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:11.026146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:11.044513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:11.074442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:11.095981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:11.111750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:11.127141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:18:11.181455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51490","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:19:10 up  4:01,  0 user,  load average: 4.47, 2.78, 2.08
	Linux embed-certs-650951 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [723dfd75d8534883b0a233c3f02128ece81fcf6a78b873b00b6e865a1aaaf248] <==
	I1014 21:18:13.500049       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:18:13.500270       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1014 21:18:13.500417       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:18:13.500428       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:18:13.500442       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:18:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:18:13.683106       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:18:13.683900       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:18:13.684006       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:18:13.684244       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1014 21:18:43.683715       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1014 21:18:43.683840       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1014 21:18:43.683850       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1014 21:18:43.683953       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1014 21:18:45.184172       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1014 21:18:45.184216       1 metrics.go:72] Registering metrics
	I1014 21:18:45.184500       1 controller.go:711] "Syncing nftables rules"
	I1014 21:18:53.684418       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1014 21:18:53.684554       1 main.go:301] handling current node
	I1014 21:19:03.688377       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1014 21:19:03.688411       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9a14bf7659c1b6c8efd0475e0eaeb245bdc49f5e97446a64f00af1e088d658af] <==
	I1014 21:18:12.187617       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 21:18:12.205945       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 21:18:12.217840       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1014 21:18:12.241178       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1014 21:18:12.247503       1 cache.go:39] Caches are synced for autoregister controller
	I1014 21:18:12.260772       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1014 21:18:12.260806       1 policy_source.go:240] refreshing policies
	I1014 21:18:12.261067       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1014 21:18:12.289442       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 21:18:12.297179       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 21:18:12.297209       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 21:18:12.307478       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:18:12.327248       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1014 21:18:12.381915       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1014 21:18:12.760608       1 controller.go:667] quota admission added evaluator for: namespaces
	I1014 21:18:12.800687       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 21:18:12.815772       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1014 21:18:12.886886       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 21:18:12.899381       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 21:18:12.912748       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 21:18:13.094639       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.245.117"}
	I1014 21:18:13.126960       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.244.101"}
	I1014 21:18:15.439127       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 21:18:15.641586       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1014 21:18:15.847692       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [75ec260b360cee33d35761419902c0686412140608fc505b0a7fd6b06809b9be] <==
	I1014 21:18:15.268247       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1014 21:18:15.270042       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1014 21:18:15.271166       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1014 21:18:15.275463       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:18:15.277544       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1014 21:18:15.279734       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1014 21:18:15.282114       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1014 21:18:15.282242       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1014 21:18:15.282311       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1014 21:18:15.282473       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1014 21:18:15.282522       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1014 21:18:15.282546       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1014 21:18:15.283574       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1014 21:18:15.283658       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1014 21:18:15.285104       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1014 21:18:15.303861       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1014 21:18:15.303916       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1014 21:18:15.303936       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 21:18:15.303941       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1014 21:18:15.303946       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1014 21:18:15.307558       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1014 21:18:15.307656       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 21:18:15.307731       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-650951"
	I1014 21:18:15.307801       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1014 21:18:15.853063       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [bd1a815a5e7132e6fea924ac0892c08444dc8492532d62c704c55ddb2bee3069] <==
	I1014 21:18:13.561949       1 server_linux.go:53] "Using iptables proxy"
	I1014 21:18:13.665885       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 21:18:13.766574       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 21:18:13.766623       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1014 21:18:13.766798       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 21:18:13.790832       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:18:13.790891       1 server_linux.go:132] "Using iptables Proxier"
	I1014 21:18:13.794700       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 21:18:13.795012       1 server.go:527] "Version info" version="v1.34.1"
	I1014 21:18:13.795037       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:18:13.796242       1 config.go:200] "Starting service config controller"
	I1014 21:18:13.796446       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 21:18:13.799744       1 config.go:106] "Starting endpoint slice config controller"
	I1014 21:18:13.799818       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 21:18:13.799860       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 21:18:13.799889       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 21:18:13.800699       1 config.go:309] "Starting node config controller"
	I1014 21:18:13.800767       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 21:18:13.800796       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 21:18:13.904514       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 21:18:13.904609       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1014 21:18:13.904697       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [af87a8da5c17113367f1c826c6f15aabf7ed94ed0d2a08fe11374ca367dd615c] <==
	I1014 21:18:09.411153       1 serving.go:386] Generated self-signed cert in-memory
	W1014 21:18:12.038650       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1014 21:18:12.038685       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1014 21:18:12.038695       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 21:18:12.038702       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 21:18:12.227353       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 21:18:12.227387       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:18:12.233589       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:18:12.234951       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:18:12.236370       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 21:18:12.236454       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 21:18:12.339009       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 14 21:18:15 embed-certs-650951 kubelet[777]: I1014 21:18:15.927368     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d573b28f-976d-41c7-8f99-865f3358595f-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-shfk2\" (UID: \"d573b28f-976d-41c7-8f99-865f3358595f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-shfk2"
	Oct 14 21:18:16 embed-certs-650951 kubelet[777]: W1014 21:18:16.120076     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749/crio-26de3a92b7d295ab8895d71a771743b91702b3b5a84bff29208d53d53cb16c7a WatchSource:0}: Error finding container 26de3a92b7d295ab8895d71a771743b91702b3b5a84bff29208d53d53cb16c7a: Status 404 returned error can't find the container with id 26de3a92b7d295ab8895d71a771743b91702b3b5a84bff29208d53d53cb16c7a
	Oct 14 21:18:16 embed-certs-650951 kubelet[777]: W1014 21:18:16.137839     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b2e56cd2dd3035abddd578ee0768948728ba5c95553607a4315344d244c52749/crio-beec1b676f773f7f1684c728a4e9608905be8fa14642d4fe3b44274e589f97ff WatchSource:0}: Error finding container beec1b676f773f7f1684c728a4e9608905be8fa14642d4fe3b44274e589f97ff: Status 404 returned error can't find the container with id beec1b676f773f7f1684c728a4e9608905be8fa14642d4fe3b44274e589f97ff
	Oct 14 21:18:19 embed-certs-650951 kubelet[777]: I1014 21:18:19.927004     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 14 21:18:21 embed-certs-650951 kubelet[777]: I1014 21:18:21.099662     777 scope.go:117] "RemoveContainer" containerID="7c76f8d1d581943659cf7a8ab777e16740a06133d5d29e3d85cab16bf098624d"
	Oct 14 21:18:22 embed-certs-650951 kubelet[777]: I1014 21:18:22.105481     777 scope.go:117] "RemoveContainer" containerID="dc60db24d3e69f3ce154221b6bbe82ef8c6f2b94857c30de6caf3f807dc8418f"
	Oct 14 21:18:22 embed-certs-650951 kubelet[777]: E1014 21:18:22.105926     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-shfk2_kubernetes-dashboard(d573b28f-976d-41c7-8f99-865f3358595f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-shfk2" podUID="d573b28f-976d-41c7-8f99-865f3358595f"
	Oct 14 21:18:22 embed-certs-650951 kubelet[777]: I1014 21:18:22.107191     777 scope.go:117] "RemoveContainer" containerID="7c76f8d1d581943659cf7a8ab777e16740a06133d5d29e3d85cab16bf098624d"
	Oct 14 21:18:23 embed-certs-650951 kubelet[777]: I1014 21:18:23.122876     777 scope.go:117] "RemoveContainer" containerID="dc60db24d3e69f3ce154221b6bbe82ef8c6f2b94857c30de6caf3f807dc8418f"
	Oct 14 21:18:23 embed-certs-650951 kubelet[777]: E1014 21:18:23.123896     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-shfk2_kubernetes-dashboard(d573b28f-976d-41c7-8f99-865f3358595f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-shfk2" podUID="d573b28f-976d-41c7-8f99-865f3358595f"
	Oct 14 21:18:26 embed-certs-650951 kubelet[777]: I1014 21:18:26.085122     777 scope.go:117] "RemoveContainer" containerID="dc60db24d3e69f3ce154221b6bbe82ef8c6f2b94857c30de6caf3f807dc8418f"
	Oct 14 21:18:26 embed-certs-650951 kubelet[777]: E1014 21:18:26.085794     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-shfk2_kubernetes-dashboard(d573b28f-976d-41c7-8f99-865f3358595f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-shfk2" podUID="d573b28f-976d-41c7-8f99-865f3358595f"
	Oct 14 21:18:40 embed-certs-650951 kubelet[777]: I1014 21:18:40.971009     777 scope.go:117] "RemoveContainer" containerID="dc60db24d3e69f3ce154221b6bbe82ef8c6f2b94857c30de6caf3f807dc8418f"
	Oct 14 21:18:41 embed-certs-650951 kubelet[777]: I1014 21:18:41.169142     777 scope.go:117] "RemoveContainer" containerID="dc60db24d3e69f3ce154221b6bbe82ef8c6f2b94857c30de6caf3f807dc8418f"
	Oct 14 21:18:41 embed-certs-650951 kubelet[777]: I1014 21:18:41.169350     777 scope.go:117] "RemoveContainer" containerID="981824df308207c62b5bdaac3e3ae1cc93631eae4528ed56bd8441f91008ee58"
	Oct 14 21:18:41 embed-certs-650951 kubelet[777]: E1014 21:18:41.169543     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-shfk2_kubernetes-dashboard(d573b28f-976d-41c7-8f99-865f3358595f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-shfk2" podUID="d573b28f-976d-41c7-8f99-865f3358595f"
	Oct 14 21:18:41 embed-certs-650951 kubelet[777]: I1014 21:18:41.193453     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7m6nl" podStartSLOduration=16.593005132000002 podStartE2EDuration="26.19163666s" podCreationTimestamp="2025-10-14 21:18:15 +0000 UTC" firstStartedPulling="2025-10-14 21:18:16.141599324 +0000 UTC m=+10.499880430" lastFinishedPulling="2025-10-14 21:18:25.740230834 +0000 UTC m=+20.098511958" observedRunningTime="2025-10-14 21:18:26.151637756 +0000 UTC m=+20.509918879" watchObservedRunningTime="2025-10-14 21:18:41.19163666 +0000 UTC m=+35.549917775"
	Oct 14 21:18:44 embed-certs-650951 kubelet[777]: I1014 21:18:44.182238     777 scope.go:117] "RemoveContainer" containerID="1ec4896f4fa53d8712ff6b3a2795cdf529deeb135de65cde7135870485394ab7"
	Oct 14 21:18:46 embed-certs-650951 kubelet[777]: I1014 21:18:46.085032     777 scope.go:117] "RemoveContainer" containerID="981824df308207c62b5bdaac3e3ae1cc93631eae4528ed56bd8441f91008ee58"
	Oct 14 21:18:46 embed-certs-650951 kubelet[777]: E1014 21:18:46.085218     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-shfk2_kubernetes-dashboard(d573b28f-976d-41c7-8f99-865f3358595f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-shfk2" podUID="d573b28f-976d-41c7-8f99-865f3358595f"
	Oct 14 21:19:00 embed-certs-650951 kubelet[777]: I1014 21:19:00.970240     777 scope.go:117] "RemoveContainer" containerID="981824df308207c62b5bdaac3e3ae1cc93631eae4528ed56bd8441f91008ee58"
	Oct 14 21:19:00 embed-certs-650951 kubelet[777]: E1014 21:19:00.970407     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-shfk2_kubernetes-dashboard(d573b28f-976d-41c7-8f99-865f3358595f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-shfk2" podUID="d573b28f-976d-41c7-8f99-865f3358595f"
	Oct 14 21:19:04 embed-certs-650951 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 14 21:19:04 embed-certs-650951 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 14 21:19:04 embed-certs-650951 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [f8e20f3e5a5c85378adc47d0f5dc7dae779c420c3c2c99ef013b4e3e8a804df1] <==
	2025/10/14 21:18:25 Using namespace: kubernetes-dashboard
	2025/10/14 21:18:25 Using in-cluster config to connect to apiserver
	2025/10/14 21:18:25 Using secret token for csrf signing
	2025/10/14 21:18:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/14 21:18:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/14 21:18:25 Successful initial request to the apiserver, version: v1.34.1
	2025/10/14 21:18:25 Generating JWE encryption key
	2025/10/14 21:18:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/14 21:18:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/14 21:18:26 Initializing JWE encryption key from synchronized object
	2025/10/14 21:18:26 Creating in-cluster Sidecar client
	2025/10/14 21:18:26 Serving insecurely on HTTP port: 9090
	2025/10/14 21:18:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/14 21:18:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/14 21:18:25 Starting overwatch
	
	
	==> storage-provisioner [1ec4896f4fa53d8712ff6b3a2795cdf529deeb135de65cde7135870485394ab7] <==
	I1014 21:18:13.547176       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1014 21:18:43.549705       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [640b830bec54943fa0a57e6d768ea1a14473244182a0769946d80d3ae3f48f13] <==
	I1014 21:18:44.264444       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 21:18:44.276905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 21:18:44.276966       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1014 21:18:44.279202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:47.743458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:52.005690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:55.604287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:18:58.658145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:19:01.680270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:19:01.685322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 21:19:01.685560       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 21:19:01.685757       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-650951_dda90deb-4706-46fd-9f4f-26ece641b0db!
	I1014 21:19:01.686768       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e765fc2b-374f-4cc2-922a-737fb3491cd4", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-650951_dda90deb-4706-46fd-9f4f-26ece641b0db became leader
	W1014 21:19:01.692488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:19:01.701680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 21:19:01.786223       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-650951_dda90deb-4706-46fd-9f4f-26ece641b0db!
	W1014 21:19:03.706188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:19:03.716781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:19:05.726063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:19:05.733179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:19:07.737745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:19:07.745950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:19:09.750578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:19:09.756765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-650951 -n embed-certs-650951
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-650951 -n embed-certs-650951: exit status 2 (494.874009ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-650951 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.53s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-718270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-718270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (278.802787ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:20:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-718270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-718270
helpers_test.go:243: (dbg) docker inspect newest-cni-718270:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370",
	        "Created": "2025-10-14T21:19:22.499110218Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 750611,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:19:22.574730089Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370/hostname",
	        "HostsPath": "/var/lib/docker/containers/9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370/hosts",
	        "LogPath": "/var/lib/docker/containers/9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370/9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370-json.log",
	        "Name": "/newest-cni-718270",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-718270:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-718270",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370",
	                "LowerDir": "/var/lib/docker/overlay2/8c30d9867a2116971c960b229284f00bed5d0fdc18c4af19422b4ddc6381f327-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8c30d9867a2116971c960b229284f00bed5d0fdc18c4af19422b4ddc6381f327/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8c30d9867a2116971c960b229284f00bed5d0fdc18c4af19422b4ddc6381f327/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8c30d9867a2116971c960b229284f00bed5d0fdc18c4af19422b4ddc6381f327/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-718270",
	                "Source": "/var/lib/docker/volumes/newest-cni-718270/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-718270",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-718270",
	                "name.minikube.sigs.k8s.io": "newest-cni-718270",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7fa5f4ec6036998af43256c861c16bdca8dd6550f8d560c83c47d47ecfef7b21",
	            "SandboxKey": "/var/run/docker/netns/7fa5f4ec6036",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33821"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-718270": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:06:f6:7c:fe:14",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "18b1986d1416397d99b38ffdc32e9f008860c1b13740e1977cc68be30c32f62d",
	                    "EndpointID": "6a566451301c5e533820fa15593cf740c8c24920c50029570da7937e1b2d8c1b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-718270",
	                        "9615898dd977"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718270 -n newest-cni-718270
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-718270 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-718270 logs -n 25: (1.359655107s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ old-k8s-version-624341 image list --format=json                                                                                                                                                                                               │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │ 14 Oct 25 21:15 UTC │
	│ pause   │ -p old-k8s-version-624341 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:15 UTC │                     │
	│ delete  │ -p old-k8s-version-624341                                                                                                                                                                                                                     │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:16 UTC │
	│ delete  │ -p old-k8s-version-624341                                                                                                                                                                                                                     │ old-k8s-version-624341       │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:16 UTC │
	│ start   │ -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-319221 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-319221 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-319221 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ start   │ -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-650951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	│ stop    │ -p embed-certs-650951 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-650951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ start   │ -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:18 UTC │
	│ image   │ default-k8s-diff-port-319221 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ pause   │ -p default-k8s-diff-port-319221 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-319221                                                                                                                                                                                                               │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ delete  │ -p default-k8s-diff-port-319221                                                                                                                                                                                                               │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ delete  │ -p disable-driver-mounts-457849                                                                                                                                                                                                               │ disable-driver-mounts-457849 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ start   │ -p no-preload-970287 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │                     │
	│ image   │ embed-certs-650951 image list --format=json                                                                                                                                                                                                   │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ pause   │ -p embed-certs-650951 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │                     │
	│ delete  │ -p embed-certs-650951                                                                                                                                                                                                                         │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ delete  │ -p embed-certs-650951                                                                                                                                                                                                                         │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ start   │ -p newest-cni-718270 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-718270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:19:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:19:14.707589  749946 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:19:14.707837  749946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:19:14.707865  749946 out.go:374] Setting ErrFile to fd 2...
	I1014 21:19:14.707884  749946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:19:14.708185  749946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:19:14.708655  749946 out.go:368] Setting JSON to false
	I1014 21:19:14.709616  749946 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14505,"bootTime":1760462250,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:19:14.709708  749946 start.go:141] virtualization:  
	I1014 21:19:14.713993  749946 out.go:179] * [newest-cni-718270] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:19:14.717265  749946 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:19:14.717332  749946 notify.go:220] Checking for updates...
	I1014 21:19:14.723474  749946 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:19:14.726710  749946 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:19:14.729865  749946 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:19:14.732733  749946 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:19:14.735839  749946 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:19:14.740239  749946 config.go:182] Loaded profile config "no-preload-970287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:19:14.740491  749946 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:19:14.780788  749946 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:19:14.780919  749946 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:19:14.879711  749946 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-14 21:19:14.869758139 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:19:14.879818  749946 docker.go:318] overlay module found
	I1014 21:19:14.884106  749946 out.go:179] * Using the docker driver based on user configuration
	I1014 21:19:14.889122  749946 start.go:305] selected driver: docker
	I1014 21:19:14.889148  749946 start.go:925] validating driver "docker" against <nil>
	I1014 21:19:14.889161  749946 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:19:14.889860  749946 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:19:14.986980  749946 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-14 21:19:14.974145231 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:19:14.987151  749946 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1014 21:19:14.987173  749946 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
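For reference, the --cni flag this warning points to can be passed directly at start time instead of --network-plugin=cni. A minimal sketch, with the CNI choice purely illustrative and not taken from this run:

    # illustrative: have minikube deploy a built-in CNI (kindnet here) itself
    out/minikube-linux-arm64 start -p newest-cni-718270 --driver=docker \
      --container-runtime=crio --cni=kindnet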
	I1014 21:19:14.987434  749946 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1014 21:19:14.992516  749946 out.go:179] * Using Docker driver with root privileges
	I1014 21:19:14.995522  749946 cni.go:84] Creating CNI manager for ""
	I1014 21:19:14.995592  749946 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:19:14.995600  749946 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 21:19:14.995688  749946 start.go:349] cluster config:
	{Name:newest-cni-718270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:19:15.002775  749946 out.go:179] * Starting "newest-cni-718270" primary control-plane node in "newest-cni-718270" cluster
	I1014 21:19:15.006351  749946 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:19:15.012305  749946 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:19:15.024334  749946 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:19:15.024405  749946 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 21:19:15.024419  749946 cache.go:58] Caching tarball of preloaded images
	I1014 21:19:15.024557  749946 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:19:15.024929  749946 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:19:15.024970  749946 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 21:19:15.025177  749946 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/config.json ...
	I1014 21:19:15.025226  749946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/config.json: {Name:mkd516beeaf9d98cec6352be7ea58da22a6af134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:19:15.065845  749946 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:19:15.065876  749946 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:19:15.065899  749946 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:19:15.065925  749946 start.go:360] acquireMachinesLock for newest-cni-718270: {Name:mk4b94f993721098e6e50976108704497d217d5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:19:15.066044  749946 start.go:364] duration metric: took 96.379µs to acquireMachinesLock for "newest-cni-718270"
	I1014 21:19:15.066078  749946 start.go:93] Provisioning new machine with config: &{Name:newest-cni-718270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:19:15.066157  749946 start.go:125] createHost starting for "" (driver="docker")
	I1014 21:19:16.774093  746423 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.725504958s)
	I1014 21:19:16.774121  746423 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1014 21:19:16.774138  746423 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1014 21:19:16.774183  746423 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1014 21:19:17.501758  746423 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-545047/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1014 21:19:17.501790  746423 cache_images.go:124] Successfully loaded all cached images
	I1014 21:19:17.501796  746423 cache_images.go:93] duration metric: took 16.695383554s to LoadCachedImages
	I1014 21:19:17.501807  746423 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1014 21:19:17.501906  746423 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-970287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-970287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
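For reference, the drop-in written from the template above can be checked on the node itself (for example over minikube ssh); a minimal sketch using standard systemd tooling:

    # show the kubelet unit together with the 10-kubeadm.conf drop-in minikube generates
    systemctl cat kubelet
    # confirm the overridden ExecStart (the long kubelet command line above) is in effect
    systemctl show kubelet -p ExecStart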
	I1014 21:19:17.501983  746423 ssh_runner.go:195] Run: crio config
	I1014 21:19:17.595232  746423 cni.go:84] Creating CNI manager for ""
	I1014 21:19:17.595254  746423 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:19:17.595271  746423 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 21:19:17.595297  746423 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-970287 NodeName:no-preload-970287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:19:17.595454  746423 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-970287"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
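For reference, a config assembled like the one above can be sanity-checked before kubeadm init runs; a minimal sketch, assuming the kubeadm binary staged under /var/lib/minikube/binaries and the file minikube later writes to /var/tmp/minikube/kubeadm.yaml:

    # validate the generated kubeadm config against the v1beta4 schema
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml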
	
	I1014 21:19:17.595533  746423 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:19:17.606130  746423 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1014 21:19:17.606198  746423 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1014 21:19:17.615310  746423 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1014 21:19:17.615404  746423 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1014 21:19:17.616214  746423 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1014 21:19:17.616753  746423 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1014 21:19:17.622517  746423 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1014 21:19:17.622555  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1014 21:19:18.677795  746423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:19:18.704706  746423 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1014 21:19:18.720597  746423 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1014 21:19:18.720681  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1014 21:19:18.776017  746423 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1014 21:19:18.786530  746423 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1014 21:19:18.786562  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
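For reference, the ?checksum=file:...sha256 suffix on these download URLs means minikube verifies each binary against the published checksum; the same check can be reproduced by hand, roughly as follows (a hypothetical manual re-check, not part of this run):

    # re-verify the cached kubelet download against its published sha256
    cd /home/jenkins/minikube-integration/21409-545047/.minikube/cache/linux/arm64/v1.34.1
    curl -fsSLO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check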
	I1014 21:19:15.069690  749946 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 21:19:15.069969  749946 start.go:159] libmachine.API.Create for "newest-cni-718270" (driver="docker")
	I1014 21:19:15.070022  749946 client.go:168] LocalClient.Create starting
	I1014 21:19:15.070102  749946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem
	I1014 21:19:15.070143  749946 main.go:141] libmachine: Decoding PEM data...
	I1014 21:19:15.070166  749946 main.go:141] libmachine: Parsing certificate...
	I1014 21:19:15.070226  749946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem
	I1014 21:19:15.070252  749946 main.go:141] libmachine: Decoding PEM data...
	I1014 21:19:15.070266  749946 main.go:141] libmachine: Parsing certificate...
	I1014 21:19:15.070649  749946 cli_runner.go:164] Run: docker network inspect newest-cni-718270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 21:19:15.090034  749946 cli_runner.go:211] docker network inspect newest-cni-718270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 21:19:15.090118  749946 network_create.go:284] running [docker network inspect newest-cni-718270] to gather additional debugging logs...
	I1014 21:19:15.090148  749946 cli_runner.go:164] Run: docker network inspect newest-cni-718270
	W1014 21:19:15.109974  749946 cli_runner.go:211] docker network inspect newest-cni-718270 returned with exit code 1
	I1014 21:19:15.110024  749946 network_create.go:287] error running [docker network inspect newest-cni-718270]: docker network inspect newest-cni-718270: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-718270 not found
	I1014 21:19:15.110040  749946 network_create.go:289] output of [docker network inspect newest-cni-718270]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-718270 not found
	
	** /stderr **
	I1014 21:19:15.110159  749946 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:19:15.138451  749946 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3483fd8ccce IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:16:f0:2b:be:5b} reservation:<nil>}
	I1014 21:19:15.138915  749946 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-92eecc9d5dcd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:46:39:04:de:cc:9d} reservation:<nil>}
	I1014 21:19:15.139396  749946 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6db3c426d67c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:2a:f4:c0:70:fe} reservation:<nil>}
	I1014 21:19:15.139835  749946 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-006eb97cac5b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fa:d4:13:6b:f6:4c} reservation:<nil>}
	I1014 21:19:15.140461  749946 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a256f0}
	I1014 21:19:15.140496  749946 network_create.go:124] attempt to create docker network newest-cni-718270 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1014 21:19:15.140580  749946 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-718270 newest-cni-718270
	I1014 21:19:15.222809  749946 network_create.go:108] docker network newest-cni-718270 192.168.85.0/24 created
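For reference, the subnet and gateway chosen for this network can be confirmed after the fact; a minimal sketch:

    # show the subnet/gateway Docker recorded for the newly created bridge network
    docker network inspect newest-cni-718270 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'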
	I1014 21:19:15.222855  749946 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-718270" container
	I1014 21:19:15.222929  749946 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 21:19:15.239305  749946 cli_runner.go:164] Run: docker volume create newest-cni-718270 --label name.minikube.sigs.k8s.io=newest-cni-718270 --label created_by.minikube.sigs.k8s.io=true
	I1014 21:19:15.260064  749946 oci.go:103] Successfully created a docker volume newest-cni-718270
	I1014 21:19:15.260164  749946 cli_runner.go:164] Run: docker run --rm --name newest-cni-718270-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-718270 --entrypoint /usr/bin/test -v newest-cni-718270:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 21:19:16.360465  749946 cli_runner.go:217] Completed: docker run --rm --name newest-cni-718270-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-718270 --entrypoint /usr/bin/test -v newest-cni-718270:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (1.100255979s)
	I1014 21:19:16.360491  749946 oci.go:107] Successfully prepared a docker volume newest-cni-718270
	I1014 21:19:16.360520  749946 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:19:16.360538  749946 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 21:19:16.360605  749946 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-718270:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 21:19:19.630000  746423 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:19:19.641556  746423 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1014 21:19:19.656395  746423 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:19:19.670238  746423 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1014 21:19:19.684121  746423 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:19:19.688826  746423 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
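For reference, the one-liner above rewrites the control-plane host entry through a temp file; the same idiom spelled out standalone with comments (printf used here only to make the tab explicit):

    # drop any existing control-plane.minikube.internal line, append the fresh mapping,
    # then copy the temp file over /etc/hosts with sudo
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
      printf '192.168.76.2\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts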
	I1014 21:19:19.698562  746423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:19:19.862036  746423 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:19:19.887293  746423 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287 for IP: 192.168.76.2
	I1014 21:19:19.887315  746423 certs.go:195] generating shared ca certs ...
	I1014 21:19:19.887332  746423 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:19:19.887467  746423 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:19:19.887512  746423 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:19:19.887523  746423 certs.go:257] generating profile certs ...
	I1014 21:19:19.887579  746423 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.key
	I1014 21:19:19.887594  746423 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.crt with IP's: []
	I1014 21:19:20.002719  746423 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.crt ...
	I1014 21:19:20.002760  746423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.crt: {Name:mk894c05f3239e25579b693fe284a4f3f830c157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:19:20.003009  746423 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.key ...
	I1014 21:19:20.003025  746423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.key: {Name:mk96985378cb5a06787f664a80fbb41879cb66fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:19:20.003135  746423 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/apiserver.key.5c06cc4f
	I1014 21:19:20.003155  746423 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/apiserver.crt.5c06cc4f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1014 21:19:20.649365  746423 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/apiserver.crt.5c06cc4f ...
	I1014 21:19:20.649399  746423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/apiserver.crt.5c06cc4f: {Name:mk96df7e38bcd06bb8ab458176db897d0b820661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:19:20.649590  746423 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/apiserver.key.5c06cc4f ...
	I1014 21:19:20.649604  746423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/apiserver.key.5c06cc4f: {Name:mk5ba25799fce11b11fd54d33fad57c838d703bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:19:20.649694  746423 certs.go:382] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/apiserver.crt.5c06cc4f -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/apiserver.crt
	I1014 21:19:20.649769  746423 certs.go:386] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/apiserver.key.5c06cc4f -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/apiserver.key
	I1014 21:19:20.649828  746423 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/proxy-client.key
	I1014 21:19:20.649846  746423 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/proxy-client.crt with IP's: []
	I1014 21:19:20.958601  746423 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/proxy-client.crt ...
	I1014 21:19:20.958633  746423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/proxy-client.crt: {Name:mk0459f5d8437553a25ab8fe7bfeba76a3728298 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:19:20.958822  746423 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/proxy-client.key ...
	I1014 21:19:20.958838  746423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/proxy-client.key: {Name:mk264a70188630c4ade87963694b8021cc002cbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:19:20.959034  746423 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:19:20.959078  746423 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:19:20.959094  746423 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:19:20.959119  746423 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:19:20.959147  746423 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:19:20.959172  746423 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:19:20.959216  746423 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:19:20.959871  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:19:20.979452  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:19:21.001096  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:19:21.022395  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:19:21.041944  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 21:19:21.060103  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 21:19:21.079073  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:19:21.097644  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 21:19:21.117201  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:19:21.137550  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:19:21.157347  746423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:19:21.176212  746423 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:19:21.189623  746423 ssh_runner.go:195] Run: openssl version
	I1014 21:19:21.211323  746423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:19:21.223019  746423 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:19:21.227727  746423 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:19:21.227818  746423 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:19:21.269195  746423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 21:19:21.278176  746423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:19:21.287079  746423 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:19:21.291593  746423 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:19:21.291658  746423 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:19:21.338360  746423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:19:21.348389  746423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:19:21.357766  746423 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:19:21.362835  746423 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:19:21.362928  746423 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:19:21.404500  746423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
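For reference, the /etc/ssl/certs/<hash>.0 links created in these steps follow OpenSSL's subject-hash lookup convention; a minimal sketch of where a name like b5213941.0 comes from:

    # print the subject hash OpenSSL uses as the symlink name for this CA
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # the matching symlink lets TLS clients find the CA by hash
    ls -l /etc/ssl/certs/b5213941.0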
	I1014 21:19:21.413790  746423 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:19:21.418187  746423 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 21:19:21.418249  746423 kubeadm.go:400] StartCluster: {Name:no-preload-970287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-970287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:19:21.418322  746423 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:19:21.418382  746423 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:19:21.449984  746423 cri.go:89] found id: ""
	I1014 21:19:21.450137  746423 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:19:21.460249  746423 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 21:19:21.468924  746423 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 21:19:21.468990  746423 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 21:19:21.477787  746423 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 21:19:21.477808  746423 kubeadm.go:157] found existing configuration files:
	
	I1014 21:19:21.477860  746423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 21:19:21.486329  746423 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 21:19:21.486445  746423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 21:19:21.495081  746423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 21:19:21.504418  746423 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 21:19:21.504488  746423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 21:19:21.513343  746423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 21:19:21.523783  746423 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 21:19:21.523903  746423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 21:19:21.538151  746423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 21:19:21.552242  746423 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 21:19:21.552390  746423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
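The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init runs. A minimal Go sketch of the same check-then-remove pattern, for illustration only (not minikube's actual implementation; paths and endpoint are copied from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConf keeps a kubeconfig only if it already points at the expected
// control-plane endpoint; otherwise it removes the file, mirroring the
// grep-then-"rm -f" steps in the log above.
func cleanStaleConf(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // nothing on disk, nothing to clean
		}
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // config already targets the right endpoint, keep it
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanStaleConf(f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}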
	I1014 21:19:21.574836  746423 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 21:19:21.620539  746423 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 21:19:21.621309  746423 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 21:19:21.649046  746423 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 21:19:21.649149  746423 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1014 21:19:21.649236  746423 kubeadm.go:318] OS: Linux
	I1014 21:19:21.649302  746423 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 21:19:21.649381  746423 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1014 21:19:21.649469  746423 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 21:19:21.649532  746423 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 21:19:21.649583  746423 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 21:19:21.649634  746423 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 21:19:21.649682  746423 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 21:19:21.649733  746423 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 21:19:21.649781  746423 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1014 21:19:21.718079  746423 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 21:19:21.718231  746423 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 21:19:21.718359  746423 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 21:19:21.732842  746423 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 21:19:21.795654  746423 out.go:252]   - Generating certificates and keys ...
	I1014 21:19:21.795774  746423 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 21:19:21.795858  746423 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 21:19:22.129642  746423 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 21:19:22.603223  746423 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 21:19:23.057160  746423 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 21:19:23.182744  746423 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 21:19:23.740299  746423 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 21:19:23.740967  746423 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-970287] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1014 21:19:22.400245  749946 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-718270:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (6.039602709s)
	I1014 21:19:22.400273  749946 kic.go:203] duration metric: took 6.039730582s to extract preloaded images to volume ...
	W1014 21:19:22.400575  749946 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1014 21:19:22.400676  749946 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 21:19:22.482377  749946 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-718270 --name newest-cni-718270 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-718270 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-718270 --network newest-cni-718270 --ip 192.168.85.2 --volume newest-cni-718270:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 21:19:22.864527  749946 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Running}}
	I1014 21:19:22.887182  749946 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:19:22.917031  749946 cli_runner.go:164] Run: docker exec newest-cni-718270 stat /var/lib/dpkg/alternatives/iptables
	I1014 21:19:22.981282  749946 oci.go:144] the created container "newest-cni-718270" has a running status.
	I1014 21:19:22.981319  749946 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa...
	I1014 21:19:23.404229  749946 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 21:19:23.439186  749946 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:19:23.474246  749946 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 21:19:23.474286  749946 kic_runner.go:114] Args: [docker exec --privileged newest-cni-718270 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 21:19:23.544204  749946 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:19:23.568480  749946 machine.go:93] provisionDockerMachine start ...
	I1014 21:19:23.568579  749946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:19:23.607140  749946 main.go:141] libmachine: Using SSH client type: native
	I1014 21:19:23.607480  749946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1014 21:19:23.607490  749946 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:19:23.608213  749946 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47628->127.0.0.1:33817: read: connection reset by peer
	I1014 21:19:24.108663  746423 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 21:19:24.108977  746423 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-970287] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1014 21:19:24.608627  746423 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 21:19:24.716246  746423 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 21:19:25.085371  746423 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 21:19:25.085664  746423 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 21:19:26.364547  746423 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 21:19:27.333853  746423 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 21:19:27.633581  746423 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 21:19:28.453804  746423 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 21:19:28.633368  746423 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 21:19:28.634497  746423 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 21:19:28.637659  746423 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 21:19:28.641908  746423 out.go:252]   - Booting up control plane ...
	I1014 21:19:28.642030  746423 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 21:19:28.642369  746423 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 21:19:28.643910  746423 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 21:19:28.664709  746423 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 21:19:28.665315  746423 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 21:19:28.674846  746423 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 21:19:28.674958  746423 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 21:19:28.675004  746423 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 21:19:28.843456  746423 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 21:19:28.843587  746423 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
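The kubelet-check line above polls http://127.0.0.1:10248/healthz for up to 4m0s. A small Go sketch of that kind of health-polling loop, assuming only the endpoint and timeout quoted in the log line (illustrative, not kubeadm's code):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// pollHealthz polls an HTTP health endpoint until it returns 200 OK
// or the deadline passes, like the kubelet-check step logged above.
func pollHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := pollHealthz("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubelet is healthy")
}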
	I1014 21:19:26.760018  749946 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-718270
	
	I1014 21:19:26.760043  749946 ubuntu.go:182] provisioning hostname "newest-cni-718270"
	I1014 21:19:26.760117  749946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:19:26.787845  749946 main.go:141] libmachine: Using SSH client type: native
	I1014 21:19:26.788159  749946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1014 21:19:26.788176  749946 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-718270 && echo "newest-cni-718270" | sudo tee /etc/hostname
	I1014 21:19:26.958987  749946 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-718270
	
	I1014 21:19:26.959146  749946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:19:26.990093  749946 main.go:141] libmachine: Using SSH client type: native
	I1014 21:19:26.990429  749946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1014 21:19:26.990447  749946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-718270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-718270/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-718270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:19:27.156814  749946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
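The provisioning steps above run each shell command over SSH against the container's forwarded port (127.0.0.1:33817, user docker, "SSH client type: native"). A minimal Go sketch of running one such command with golang.org/x/crypto/ssh; the key path is hypothetical and host-key checking is disabled purely for illustration:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path; the CI run uses the per-profile id_rsa under .minikube/machines.
	key, err := os.ReadFile("/home/user/.minikube/machines/example/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33817", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("remote hostname: %s", out)
}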
	I1014 21:19:27.156890  749946 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:19:27.156927  749946 ubuntu.go:190] setting up certificates
	I1014 21:19:27.156963  749946 provision.go:84] configureAuth start
	I1014 21:19:27.157060  749946 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718270
	I1014 21:19:27.179178  749946 provision.go:143] copyHostCerts
	I1014 21:19:27.179244  749946 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:19:27.179254  749946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:19:27.179330  749946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:19:27.179423  749946 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:19:27.179432  749946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:19:27.179460  749946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:19:27.179517  749946 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:19:27.179522  749946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:19:27.179549  749946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:19:27.179601  749946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.newest-cni-718270 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-718270]
	I1014 21:19:27.546333  749946 provision.go:177] copyRemoteCerts
	I1014 21:19:27.548452  749946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:19:27.548554  749946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:19:27.582167  749946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:19:27.708901  749946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:19:27.727882  749946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 21:19:27.746888  749946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 21:19:27.766157  749946 provision.go:87] duration metric: took 609.152725ms to configureAuth
	I1014 21:19:27.766185  749946 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:19:27.766423  749946 config.go:182] Loaded profile config "newest-cni-718270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:19:27.766564  749946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:19:27.786681  749946 main.go:141] libmachine: Using SSH client type: native
	I1014 21:19:27.787015  749946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1014 21:19:27.787037  749946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:19:28.100649  749946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:19:28.100675  749946 machine.go:96] duration metric: took 4.53216753s to provisionDockerMachine
	I1014 21:19:28.100686  749946 client.go:171] duration metric: took 13.030652975s to LocalClient.Create
	I1014 21:19:28.100703  749946 start.go:167] duration metric: took 13.030736931s to libmachine.API.Create "newest-cni-718270"
	I1014 21:19:28.100711  749946 start.go:293] postStartSetup for "newest-cni-718270" (driver="docker")
	I1014 21:19:28.100721  749946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:19:28.100796  749946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:19:28.100836  749946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:19:28.125911  749946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:19:28.237986  749946 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:19:28.242086  749946 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:19:28.242119  749946 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:19:28.242137  749946 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:19:28.242200  749946 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:19:28.242289  749946 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:19:28.242393  749946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:19:28.251007  749946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:19:28.273038  749946 start.go:296] duration metric: took 172.312568ms for postStartSetup
	I1014 21:19:28.273454  749946 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718270
	I1014 21:19:28.297863  749946 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/config.json ...
	I1014 21:19:28.298148  749946 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:19:28.298189  749946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:19:28.329876  749946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:19:28.441558  749946 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:19:28.447107  749946 start.go:128] duration metric: took 13.380932221s to createHost
	I1014 21:19:28.447133  749946 start.go:83] releasing machines lock for "newest-cni-718270", held for 13.381075543s
	I1014 21:19:28.447208  749946 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718270
	I1014 21:19:28.470557  749946 ssh_runner.go:195] Run: cat /version.json
	I1014 21:19:28.470614  749946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:19:28.470687  749946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:19:28.470745  749946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:19:28.500909  749946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:19:28.512151  749946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:19:28.621967  749946 ssh_runner.go:195] Run: systemctl --version
	I1014 21:19:28.723513  749946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:19:28.778837  749946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:19:28.784949  749946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:19:28.785026  749946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:19:28.835873  749946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1014 21:19:28.835908  749946 start.go:495] detecting cgroup driver to use...
	I1014 21:19:28.835940  749946 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:19:28.835996  749946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:19:28.858665  749946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:19:28.873505  749946 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:19:28.873574  749946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:19:28.893172  749946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:19:28.913255  749946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:19:29.067239  749946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:19:29.236120  749946 docker.go:234] disabling docker service ...
	I1014 21:19:29.236240  749946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:19:29.261575  749946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:19:29.276586  749946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:19:29.418917  749946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:19:29.575774  749946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:19:29.591573  749946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:19:29.606433  749946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:19:29.606528  749946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:29.615807  749946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:19:29.615931  749946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:29.625364  749946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:29.634748  749946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:29.644029  749946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:19:29.652789  749946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:29.661814  749946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:29.675971  749946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:19:29.685002  749946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:19:29.694540  749946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:19:29.702176  749946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:19:29.858897  749946 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 21:19:30.002082  749946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:19:30.002213  749946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:19:30.008161  749946 start.go:563] Will wait 60s for crictl version
	I1014 21:19:30.008288  749946 ssh_runner.go:195] Run: which crictl
	I1014 21:19:30.013907  749946 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:19:30.084715  749946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
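Several ssh_runner steps above simply shell out to crictl/crio and read the output (for example "sudo /usr/local/bin/crictl version"). Run locally, the same call is a one-liner with os/exec; a sketch, assuming crictl is on PATH and sudo does not prompt for a password:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Illustrative only: run the same command the ssh_runner lines above do.
	out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl version failed: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}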
	I1014 21:19:30.084865  749946 ssh_runner.go:195] Run: crio --version
	I1014 21:19:30.126420  749946 ssh_runner.go:195] Run: crio --version
	I1014 21:19:30.172773  749946 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 21:19:30.175674  749946 cli_runner.go:164] Run: docker network inspect newest-cni-718270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:19:30.198372  749946 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1014 21:19:30.203005  749946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
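The one-liner above makes the host.minikube.internal entry idempotent: it filters any existing line for that name out of /etc/hosts and appends a fresh "IP<TAB>name" record. The same idea in Go, writing to a scratch file instead of /etc/hosts (illustrative only; the file name is hypothetical):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<name>" and appends
// a fresh "ip\tname" record, mirroring the grep -v / tee one-liner in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, _ := os.ReadFile(path) // a missing file is treated as empty
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("hosts.test", "192.168.85.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}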
	I1014 21:19:30.226122  749946 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1014 21:19:30.848671  746423 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001690152s
	I1014 21:19:30.848784  746423 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 21:19:30.848873  746423 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1014 21:19:30.848966  746423 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 21:19:30.849047  746423 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 21:19:30.228998  749946 kubeadm.go:883] updating cluster {Name:newest-cni-718270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:19:30.229148  749946 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:19:30.229236  749946 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:19:30.271640  749946 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:19:30.271668  749946 crio.go:433] Images already preloaded, skipping extraction
	I1014 21:19:30.271726  749946 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:19:30.307775  749946 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:19:30.307800  749946 cache_images.go:85] Images are preloaded, skipping loading
	I1014 21:19:30.307810  749946 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1014 21:19:30.307965  749946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-718270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 21:19:30.308076  749946 ssh_runner.go:195] Run: crio config
	I1014 21:19:30.400844  749946 cni.go:84] Creating CNI manager for ""
	I1014 21:19:30.400875  749946 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:19:30.400896  749946 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1014 21:19:30.400947  749946 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-718270 NodeName:newest-cni-718270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:19:30.401117  749946 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-718270"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 21:19:30.401210  749946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:19:30.410244  749946 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:19:30.410347  749946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:19:30.422973  749946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1014 21:19:30.435870  749946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:19:30.453840  749946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
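The 2212-byte kubeadm.yaml.new written above is the rendered form of the ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration dump shown earlier in the log. A trimmed-down Go sketch of rendering such a config with text/template; it covers only a handful of fields and is not minikube's actual template, though the values are taken from the log:

package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

type clusterParams struct {
	ClusterName          string
	ControlPlaneEndpoint string
	KubernetesVersion    string
	DNSDomain            string
	PodSubnet            string
	ServiceSubnet        string
}

func main() {
	p := clusterParams{
		ClusterName:          "mk",
		ControlPlaneEndpoint: "control-plane.minikube.internal:8443",
		KubernetesVersion:    "v1.34.1",
		DNSDomain:            "cluster.local",
		PodSubnet:            "10.42.0.0/16",
		ServiceSubnet:        "10.96.0.0/12",
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}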
	I1014 21:19:30.472551  749946 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:19:30.476718  749946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:19:30.489654  749946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:19:30.615406  749946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:19:30.648846  749946 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270 for IP: 192.168.85.2
	I1014 21:19:30.648878  749946 certs.go:195] generating shared ca certs ...
	I1014 21:19:30.648912  749946 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:19:30.649095  749946 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:19:30.649148  749946 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:19:30.649165  749946 certs.go:257] generating profile certs ...
	I1014 21:19:30.649235  749946 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/client.key
	I1014 21:19:30.649276  749946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/client.crt with IP's: []
	I1014 21:19:31.453656  749946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/client.crt ...
	I1014 21:19:31.453689  749946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/client.crt: {Name:mkf63377695100a4877950fcb132b0a20e3ac7eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:19:31.453896  749946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/client.key ...
	I1014 21:19:31.453909  749946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/client.key: {Name:mkcecb859e36887788ded56e4158975d3d654b71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:19:31.454009  749946 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.key.df573141
	I1014 21:19:31.454028  749946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.crt.df573141 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1014 21:19:32.856480  749946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.crt.df573141 ...
	I1014 21:19:32.856515  749946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.crt.df573141: {Name:mk91b883c5af5f0d783c93aa98c98398fa96ed9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:19:32.856731  749946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.key.df573141 ...
	I1014 21:19:32.856747  749946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.key.df573141: {Name:mkd202acbae828fbc1496081deb0fed3dc780a27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:19:32.856848  749946 certs.go:382] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.crt.df573141 -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.crt
	I1014 21:19:32.856936  749946 certs.go:386] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.key.df573141 -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.key
	I1014 21:19:32.856999  749946 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/proxy-client.key
	I1014 21:19:32.857018  749946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/proxy-client.crt with IP's: []
	I1014 21:19:34.047191  749946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/proxy-client.crt ...
	I1014 21:19:34.047225  749946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/proxy-client.crt: {Name:mk487783cca0e8493b75607a7eb11a8215be8b85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:19:34.047451  749946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/proxy-client.key ...
	I1014 21:19:34.047467  749946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/proxy-client.key: {Name:mk6b89fb20b0ea3b3690eae412aaca61248c45b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
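The certs steps above generate the profile certificates (client, apiserver, proxy-client) and sign them with the shared minikube CA. A compact Go sketch of the underlying crypto/x509 pattern, creating a throwaway CA and one client certificate signed by it; subjects and lifetimes here are illustrative, not the values minikube uses:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// must keeps the sketch short; a real tool would handle these errors.
func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA standing in for the shared "minikubeCA" above.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "example-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Client certificate signed by the CA, like the profile "client.crt" step in the log.
	cliKey := must(rsa.GenerateKey(rand.Reader, 2048))
	cliTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "example-user"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	cliDER := must(x509.CreateCertificate(rand.Reader, cliTmpl, caCert, &cliKey.PublicKey, caKey))

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: cliDER})
}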
	I1014 21:19:34.047679  749946 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:19:34.047732  749946 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:19:34.047745  749946 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:19:34.047770  749946 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:19:34.047797  749946 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:19:34.047824  749946 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:19:34.047885  749946 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:19:34.048560  749946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:19:34.069449  749946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:19:34.090781  749946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:19:34.111159  749946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:19:34.131016  749946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 21:19:34.150548  749946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 21:19:34.170227  749946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:19:34.190944  749946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 21:19:34.210499  749946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:19:34.229344  749946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:19:34.249142  749946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:19:34.268948  749946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:19:34.282891  749946 ssh_runner.go:195] Run: openssl version
	I1014 21:19:34.290776  749946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:19:34.299824  749946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:19:34.304223  749946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:19:34.304301  749946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:19:34.363040  749946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:19:34.375063  749946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:19:34.387115  749946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:19:34.391667  749946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:19:34.391748  749946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:19:34.445873  749946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 21:19:34.454800  749946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:19:34.467070  749946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:19:34.477159  749946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:19:34.477250  749946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:19:34.523819  749946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 21:19:34.532888  749946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:19:34.537404  749946 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 21:19:34.537459  749946 kubeadm.go:400] StartCluster: {Name:newest-cni-718270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:19:34.537581  749946 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:19:34.537655  749946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:19:34.568260  749946 cri.go:89] found id: ""
	I1014 21:19:34.568364  749946 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:19:34.578883  749946 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 21:19:34.587691  749946 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 21:19:34.587767  749946 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 21:19:34.598700  749946 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 21:19:34.598723  749946 kubeadm.go:157] found existing configuration files:
	
	I1014 21:19:34.598794  749946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 21:19:34.607778  749946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 21:19:34.607850  749946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 21:19:34.622185  749946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 21:19:34.631675  749946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 21:19:34.631752  749946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 21:19:34.644214  749946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 21:19:34.656168  749946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 21:19:34.656245  749946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 21:19:34.673083  749946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 21:19:34.685683  749946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 21:19:34.685775  749946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 21:19:34.695004  749946 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 21:19:34.796651  749946 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 21:19:34.797087  749946 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 21:19:34.847948  749946 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 21:19:34.848124  749946 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1014 21:19:34.848194  749946 kubeadm.go:318] OS: Linux
	I1014 21:19:34.848275  749946 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 21:19:34.848380  749946 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1014 21:19:34.848461  749946 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 21:19:34.848557  749946 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 21:19:34.848628  749946 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 21:19:34.848681  749946 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 21:19:34.848731  749946 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 21:19:34.848797  749946 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 21:19:34.848848  749946 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1014 21:19:35.000765  749946 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 21:19:35.000884  749946 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 21:19:35.000979  749946 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 21:19:35.013725  749946 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 21:19:35.941667  746423 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.0925889s
	I1014 21:19:35.019202  749946 out.go:252]   - Generating certificates and keys ...
	I1014 21:19:35.019394  749946 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 21:19:35.019532  749946 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 21:19:35.544013  749946 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 21:19:35.854773  749946 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 21:19:36.425364  749946 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 21:19:36.701609  749946 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 21:19:36.840656  749946 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 21:19:36.840861  749946 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-718270] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1014 21:19:37.375266  749946 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 21:19:37.380717  749946 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-718270] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1014 21:19:38.984659  746423 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.136180736s
	I1014 21:19:40.353685  746423 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.504940478s
	I1014 21:19:40.392164  746423 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 21:19:40.915754  746423 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 21:19:40.942836  746423 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 21:19:40.943350  746423 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-970287 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 21:19:40.962478  746423 kubeadm.go:318] [bootstrap-token] Using token: ngedpe.1vmg9c4kv7151kj6
	I1014 21:19:40.965682  746423 out.go:252]   - Configuring RBAC rules ...
	I1014 21:19:40.965814  746423 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 21:19:40.981370  746423 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 21:19:40.995669  746423 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 21:19:41.005131  746423 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 21:19:41.011025  746423 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 21:19:41.017001  746423 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 21:19:41.109554  746423 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 21:19:41.556652  746423 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1014 21:19:42.124972  746423 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1014 21:19:42.126838  746423 kubeadm.go:318] 
	I1014 21:19:42.126926  746423 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1014 21:19:42.126940  746423 kubeadm.go:318] 
	I1014 21:19:42.127023  746423 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1014 21:19:42.127032  746423 kubeadm.go:318] 
	I1014 21:19:42.127061  746423 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1014 21:19:42.127710  746423 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 21:19:42.127789  746423 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 21:19:42.127800  746423 kubeadm.go:318] 
	I1014 21:19:42.127857  746423 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1014 21:19:42.127867  746423 kubeadm.go:318] 
	I1014 21:19:42.127928  746423 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 21:19:42.127940  746423 kubeadm.go:318] 
	I1014 21:19:42.127996  746423 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1014 21:19:42.128085  746423 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 21:19:42.128183  746423 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 21:19:42.128199  746423 kubeadm.go:318] 
	I1014 21:19:42.128687  746423 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 21:19:42.128791  746423 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1014 21:19:42.128803  746423 kubeadm.go:318] 
	I1014 21:19:42.129172  746423 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ngedpe.1vmg9c4kv7151kj6 \
	I1014 21:19:42.129292  746423 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a \
	I1014 21:19:42.129562  746423 kubeadm.go:318] 	--control-plane 
	I1014 21:19:42.129579  746423 kubeadm.go:318] 
	I1014 21:19:42.129871  746423 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1014 21:19:42.129889  746423 kubeadm.go:318] 
	I1014 21:19:42.130258  746423 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ngedpe.1vmg9c4kv7151kj6 \
	I1014 21:19:42.130648  746423 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a 
	I1014 21:19:42.145115  746423 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1014 21:19:42.145362  746423 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1014 21:19:42.145475  746423 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 21:19:42.145499  746423 cni.go:84] Creating CNI manager for ""
	I1014 21:19:42.145512  746423 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:19:42.148671  746423 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1014 21:19:42.151706  746423 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 21:19:42.168179  746423 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1014 21:19:42.168203  746423 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 21:19:42.187100  746423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 21:19:42.587471  746423 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 21:19:42.587610  746423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:19:42.587692  746423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-970287 minikube.k8s.io/updated_at=2025_10_14T21_19_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662 minikube.k8s.io/name=no-preload-970287 minikube.k8s.io/primary=true
	I1014 21:19:42.843386  746423 ops.go:34] apiserver oom_adj: -16
	I1014 21:19:42.843498  746423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:19:43.344441  746423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:19:43.844122  746423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:19:39.755518  749946 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 21:19:41.588662  749946 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 21:19:41.688179  749946 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 21:19:41.692649  749946 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 21:19:42.030550  749946 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 21:19:42.331488  749946 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 21:19:42.925465  749946 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 21:19:43.450779  749946 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 21:19:44.421372  749946 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 21:19:44.421497  749946 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 21:19:44.425965  749946 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 21:19:44.429288  749946 out.go:252]   - Booting up control plane ...
	I1014 21:19:44.429401  749946 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 21:19:44.429487  749946 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 21:19:44.429563  749946 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 21:19:44.442577  749946 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 21:19:44.442717  749946 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 21:19:44.455906  749946 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 21:19:44.456022  749946 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 21:19:44.456070  749946 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 21:19:44.607329  749946 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 21:19:44.607497  749946 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 21:19:44.344424  746423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:19:44.844157  746423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:19:45.344808  746423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:19:45.844735  746423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:19:46.344440  746423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:19:46.843975  746423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:19:47.164860  746423 kubeadm.go:1113] duration metric: took 4.57729664s to wait for elevateKubeSystemPrivileges
	I1014 21:19:47.164891  746423 kubeadm.go:402] duration metric: took 25.746647788s to StartCluster
	I1014 21:19:47.164908  746423 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:19:47.164971  746423 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:19:47.165651  746423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:19:47.165860  746423 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:19:47.165963  746423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 21:19:47.166175  746423 config.go:182] Loaded profile config "no-preload-970287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:19:47.166206  746423 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 21:19:47.166267  746423 addons.go:69] Setting storage-provisioner=true in profile "no-preload-970287"
	I1014 21:19:47.166280  746423 addons.go:238] Setting addon storage-provisioner=true in "no-preload-970287"
	I1014 21:19:47.166301  746423 host.go:66] Checking if "no-preload-970287" exists ...
	I1014 21:19:47.166738  746423 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:19:47.167022  746423 addons.go:69] Setting default-storageclass=true in profile "no-preload-970287"
	I1014 21:19:47.167038  746423 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-970287"
	I1014 21:19:47.167288  746423 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:19:47.169267  746423 out.go:179] * Verifying Kubernetes components...
	I1014 21:19:47.172185  746423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:19:47.211722  746423 addons.go:238] Setting addon default-storageclass=true in "no-preload-970287"
	I1014 21:19:47.211760  746423 host.go:66] Checking if "no-preload-970287" exists ...
	I1014 21:19:47.212191  746423 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:19:47.217696  746423 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:19:47.220740  746423 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:19:47.220770  746423 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 21:19:47.220841  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:19:47.244621  746423 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 21:19:47.244642  746423 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 21:19:47.244711  746423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:19:47.268407  746423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:19:47.291044  746423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:19:47.741028  746423 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 21:19:47.785009  746423 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:19:47.785148  746423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 21:19:47.877837  746423 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:19:48.123848  746423 node_ready.go:35] waiting up to 6m0s for node "no-preload-970287" to be "Ready" ...
	I1014 21:19:48.691356  746423 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1014 21:19:49.080578  746423 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.202686929s)
	I1014 21:19:49.083173  746423 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1014 21:19:45.608275  749946 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001028071s
	I1014 21:19:45.611965  749946 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 21:19:45.612102  749946 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1014 21:19:45.612620  749946 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 21:19:45.612739  749946 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 21:19:49.085614  746423 addons.go:514] duration metric: took 1.919396116s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1014 21:19:49.195917  746423 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-970287" context rescaled to 1 replicas
	W1014 21:19:50.141901  746423 node_ready.go:57] node "no-preload-970287" has "Ready":"False" status (will retry)
	W1014 21:19:52.627094  746423 node_ready.go:57] node "no-preload-970287" has "Ready":"False" status (will retry)
	I1014 21:19:51.137223  749946 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.525017473s
	I1014 21:19:53.186206  749946 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.574271761s
	I1014 21:19:54.614032  749946 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.001829178s
	I1014 21:19:54.635577  749946 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 21:19:54.654214  749946 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 21:19:54.672296  749946 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 21:19:54.672533  749946 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-718270 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 21:19:54.690435  749946 kubeadm.go:318] [bootstrap-token] Using token: zk2tvh.1prwd6qg5rwpiwec
	I1014 21:19:54.693608  749946 out.go:252]   - Configuring RBAC rules ...
	I1014 21:19:54.693742  749946 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 21:19:54.697550  749946 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 21:19:54.708337  749946 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 21:19:54.713008  749946 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 21:19:54.717268  749946 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 21:19:54.721981  749946 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 21:19:55.022086  749946 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 21:19:55.459712  749946 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1014 21:19:56.022870  749946 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1014 21:19:56.023981  749946 kubeadm.go:318] 
	I1014 21:19:56.024063  749946 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1014 21:19:56.024088  749946 kubeadm.go:318] 
	I1014 21:19:56.024169  749946 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1014 21:19:56.024175  749946 kubeadm.go:318] 
	I1014 21:19:56.024202  749946 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1014 21:19:56.024263  749946 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 21:19:56.024372  749946 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 21:19:56.024380  749946 kubeadm.go:318] 
	I1014 21:19:56.024436  749946 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1014 21:19:56.024441  749946 kubeadm.go:318] 
	I1014 21:19:56.024491  749946 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 21:19:56.024496  749946 kubeadm.go:318] 
	I1014 21:19:56.024550  749946 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1014 21:19:56.024628  749946 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 21:19:56.024699  749946 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 21:19:56.024704  749946 kubeadm.go:318] 
	I1014 21:19:56.024791  749946 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 21:19:56.024871  749946 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1014 21:19:56.024876  749946 kubeadm.go:318] 
	I1014 21:19:56.025230  749946 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token zk2tvh.1prwd6qg5rwpiwec \
	I1014 21:19:56.025346  749946 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a \
	I1014 21:19:56.025368  749946 kubeadm.go:318] 	--control-plane 
	I1014 21:19:56.025373  749946 kubeadm.go:318] 
	I1014 21:19:56.025462  749946 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1014 21:19:56.025467  749946 kubeadm.go:318] 
	I1014 21:19:56.025553  749946 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token zk2tvh.1prwd6qg5rwpiwec \
	I1014 21:19:56.025660  749946 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a 
	I1014 21:19:56.030032  749946 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1014 21:19:56.030275  749946 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1014 21:19:56.030390  749946 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 21:19:56.030412  749946 cni.go:84] Creating CNI manager for ""
	I1014 21:19:56.030425  749946 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:19:56.033482  749946 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1014 21:19:54.627473  746423 node_ready.go:57] node "no-preload-970287" has "Ready":"False" status (will retry)
	W1014 21:19:57.127123  746423 node_ready.go:57] node "no-preload-970287" has "Ready":"False" status (will retry)
	I1014 21:19:56.036492  749946 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 21:19:56.042135  749946 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1014 21:19:56.042172  749946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 21:19:56.059299  749946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 21:19:56.412154  749946 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 21:19:56.412382  749946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:19:56.412502  749946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-718270 minikube.k8s.io/updated_at=2025_10_14T21_19_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662 minikube.k8s.io/name=newest-cni-718270 minikube.k8s.io/primary=true
	I1014 21:19:56.429351  749946 ops.go:34] apiserver oom_adj: -16
	I1014 21:19:56.582351  749946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:19:57.082889  749946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:19:57.583418  749946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:19:58.083289  749946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:19:58.582392  749946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:19:59.083114  749946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:19:59.582712  749946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:20:00.083349  749946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:20:00.583309  749946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:20:00.750774  749946 kubeadm.go:1113] duration metric: took 4.338495863s to wait for elevateKubeSystemPrivileges
	I1014 21:20:00.750806  749946 kubeadm.go:402] duration metric: took 26.213349671s to StartCluster
	I1014 21:20:00.750823  749946 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:00.750889  749946 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:20:00.752101  749946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:00.752706  749946 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:20:00.752822  749946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 21:20:00.753134  749946 config.go:182] Loaded profile config "newest-cni-718270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:00.753182  749946 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 21:20:00.753271  749946 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-718270"
	I1014 21:20:00.753290  749946 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-718270"
	I1014 21:20:00.753301  749946 addons.go:69] Setting default-storageclass=true in profile "newest-cni-718270"
	I1014 21:20:00.753317  749946 host.go:66] Checking if "newest-cni-718270" exists ...
	I1014 21:20:00.753323  749946 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-718270"
	I1014 21:20:00.753701  749946 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:00.754041  749946 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:00.759441  749946 out.go:179] * Verifying Kubernetes components...
	I1014 21:20:00.764534  749946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:00.816728  749946 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:20:00.820521  749946 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:20:00.820544  749946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 21:20:00.820610  749946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:00.821295  749946 addons.go:238] Setting addon default-storageclass=true in "newest-cni-718270"
	I1014 21:20:00.821337  749946 host.go:66] Checking if "newest-cni-718270" exists ...
	I1014 21:20:00.821764  749946 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:00.862649  749946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:00.863400  749946 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 21:20:00.863446  749946 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 21:20:00.863531  749946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:00.898918  749946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:01.179499  749946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:20:01.185049  749946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:20:01.185313  749946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 21:20:01.326898  749946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 21:20:02.114295  749946 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1014 21:20:02.115247  749946 api_server.go:52] waiting for apiserver process to appear ...
	I1014 21:20:02.115323  749946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 21:20:02.147850  749946 api_server.go:72] duration metric: took 1.395096961s to wait for apiserver process to appear ...
	I1014 21:20:02.147875  749946 api_server.go:88] waiting for apiserver healthz status ...
	I1014 21:20:02.147894  749946 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:20:02.166401  749946 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1014 21:20:02.167479  749946 api_server.go:141] control plane version: v1.34.1
	I1014 21:20:02.167539  749946 api_server.go:131] duration metric: took 19.655701ms to wait for apiserver health ...
	I1014 21:20:02.167563  749946 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 21:20:02.176737  749946 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1014 21:20:02.178943  749946 system_pods.go:59] 9 kube-system pods found
	I1014 21:20:02.178983  749946 system_pods.go:61] "coredns-66bc5c9577-bdhlq" [b96a6307-6a92-458c-8cbf-c5c102940d5f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1014 21:20:02.178993  749946 system_pods.go:61] "coredns-66bc5c9577-zzm45" [75daaaa8-0347-4edb-9b5d-5cf034f5915a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1014 21:20:02.179000  749946 system_pods.go:61] "etcd-newest-cni-718270" [f355740c-adf7-429e-a1da-7ce4543c4d04] Running
	I1014 21:20:02.179007  749946 system_pods.go:61] "kindnet-vdqbd" [5e5738b0-840c-4264-b49e-f9e41a70647e] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1014 21:20:02.179012  749946 system_pods.go:61] "kube-apiserver-newest-cni-718270" [d2e80a6a-1be9-4d68-8c1f-ca465eeb82ad] Running
	I1014 21:20:02.179017  749946 system_pods.go:61] "kube-controller-manager-newest-cni-718270" [66df72c2-6625-4150-bedb-4fbf0594354f] Running
	I1014 21:20:02.179023  749946 system_pods.go:61] "kube-proxy-97kps" [c9808191-fd7e-4064-a7a4-f8d026004ef8] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 21:20:02.179033  749946 system_pods.go:61] "kube-scheduler-newest-cni-718270" [815541f5-8b42-4b45-b84d-3c1d72b0060f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 21:20:02.179045  749946 system_pods.go:61] "storage-provisioner" [f93166f5-1e47-44fd-b4b9-95957da83455] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1014 21:20:02.179051  749946 system_pods.go:74] duration metric: took 11.469489ms to wait for pod list to return data ...
	I1014 21:20:02.179064  749946 default_sa.go:34] waiting for default service account to be created ...
	I1014 21:20:02.179549  749946 addons.go:514] duration metric: took 1.426351967s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 21:20:02.187169  749946 default_sa.go:45] found service account: "default"
	I1014 21:20:02.187237  749946 default_sa.go:55] duration metric: took 8.165387ms for default service account to be created ...
	I1014 21:20:02.187264  749946 kubeadm.go:586] duration metric: took 1.434516912s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1014 21:20:02.187315  749946 node_conditions.go:102] verifying NodePressure condition ...
	I1014 21:20:02.195145  749946 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 21:20:02.195226  749946 node_conditions.go:123] node cpu capacity is 2
	I1014 21:20:02.195253  749946 node_conditions.go:105] duration metric: took 7.914182ms to run NodePressure ...
	I1014 21:20:02.195294  749946 start.go:241] waiting for startup goroutines ...
	I1014 21:20:02.621101  749946 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-718270" context rescaled to 1 replicas
	I1014 21:20:02.621142  749946 start.go:246] waiting for cluster config update ...
	I1014 21:20:02.621164  749946 start.go:255] writing updated cluster config ...
	I1014 21:20:02.621486  749946 ssh_runner.go:195] Run: rm -f paused
	I1014 21:20:02.697941  749946 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1014 21:20:02.701079  749946 out.go:179] * Done! kubectl is now configured to use "newest-cni-718270" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.415457944Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.423699083Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=eb4ea06c-2c6f-4a1b-8e81-19e5c47b4ca1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.428926145Z" level=info msg="Ran pod sandbox 49e227028f33880a883ae3daed41da38cd030f41cf25131275e15f4626eb06b2 with infra container: kube-system/kindnet-vdqbd/POD" id=eb4ea06c-2c6f-4a1b-8e81-19e5c47b4ca1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.432856384Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9b202eee-4f70-4720-b8b9-bde4a8c4b57e name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.435796548Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-97kps/POD" id=a19947d0-d85d-4b30-bff9-3a81c6a490ff name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.436071179Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.447947552Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a19947d0-d85d-4b30-bff9-3a81c6a490ff name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.44829892Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3937104f-15ad-4a95-8012-be1be1853b97 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.452400814Z" level=info msg="Ran pod sandbox 6936111ba23f2e301666f2b1f464cc678bfe17b8fd76fd370d5fca59d2aab6e5 with infra container: kube-system/kube-proxy-97kps/POD" id=a19947d0-d85d-4b30-bff9-3a81c6a490ff name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.45577164Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=9988c898-c6b0-4639-8e9d-4aa2610a4b09 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.462129844Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f2432be8-ad67-4437-886a-e20810693697 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.463701651Z" level=info msg="Creating container: kube-system/kindnet-vdqbd/kindnet-cni" id=27238fe3-b269-4487-bab4-c1f3d6dd8a35 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.464215582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.471458841Z" level=info msg="Creating container: kube-system/kube-proxy-97kps/kube-proxy" id=42e28adc-17e6-48ab-ba8c-79629f1a6e9a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.47210898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.47515299Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.477109477Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.481665494Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.482421243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.510935581Z" level=info msg="Created container 6e36df57c1363721995ec1bf43f791ffd15b777247908e395b08e3bac10183df: kube-system/kindnet-vdqbd/kindnet-cni" id=27238fe3-b269-4487-bab4-c1f3d6dd8a35 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.519165167Z" level=info msg="Starting container: 6e36df57c1363721995ec1bf43f791ffd15b777247908e395b08e3bac10183df" id=8d8f570b-0999-4cad-a126-714a45edb8cd name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.533712103Z" level=info msg="Started container" PID=1506 containerID=6e36df57c1363721995ec1bf43f791ffd15b777247908e395b08e3bac10183df description=kube-system/kindnet-vdqbd/kindnet-cni id=8d8f570b-0999-4cad-a126-714a45edb8cd name=/runtime.v1.RuntimeService/StartContainer sandboxID=49e227028f33880a883ae3daed41da38cd030f41cf25131275e15f4626eb06b2
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.54252374Z" level=info msg="Created container c9ef4b8578a283140d2d13c6e3c0c0aaddc4609f8a08c1cb3c32e7d7df82a11c: kube-system/kube-proxy-97kps/kube-proxy" id=42e28adc-17e6-48ab-ba8c-79629f1a6e9a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.544762243Z" level=info msg="Starting container: c9ef4b8578a283140d2d13c6e3c0c0aaddc4609f8a08c1cb3c32e7d7df82a11c" id=de2159ba-69a8-4fb3-b865-9c54d9fbe743 name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:20:01 newest-cni-718270 crio[840]: time="2025-10-14T21:20:01.551386627Z" level=info msg="Started container" PID=1510 containerID=c9ef4b8578a283140d2d13c6e3c0c0aaddc4609f8a08c1cb3c32e7d7df82a11c description=kube-system/kube-proxy-97kps/kube-proxy id=de2159ba-69a8-4fb3-b865-9c54d9fbe743 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6936111ba23f2e301666f2b1f464cc678bfe17b8fd76fd370d5fca59d2aab6e5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c9ef4b8578a28       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   6936111ba23f2       kube-proxy-97kps                            kube-system
	6e36df57c1363       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   49e227028f338       kindnet-vdqbd                               kube-system
	bd6ec79fb24a7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   18 seconds ago      Running             etcd                      0                   223f2fdbf3303       etcd-newest-cni-718270                      kube-system
	608b69614a16d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   18 seconds ago      Running             kube-scheduler            0                   708ce65ae148a       kube-scheduler-newest-cni-718270            kube-system
	49dedd992488e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   18 seconds ago      Running             kube-apiserver            0                   b361292a240eb       kube-apiserver-newest-cni-718270            kube-system
	a26d2f806ca12       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   18 seconds ago      Running             kube-controller-manager   0                   a01f2a331b72b       kube-controller-manager-newest-cni-718270   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-718270
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-718270
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=newest-cni-718270
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T21_19_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 21:19:53 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-718270
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 21:19:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 21:19:55 +0000   Tue, 14 Oct 2025 21:19:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 21:19:55 +0000   Tue, 14 Oct 2025 21:19:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 21:19:55 +0000   Tue, 14 Oct 2025 21:19:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 14 Oct 2025 21:19:55 +0000   Tue, 14 Oct 2025 21:19:46 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-718270
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf6959d524224e128b7f7baf30bae45d
	  System UUID:                a5834ddd-d2a7-4491-bd6d-465132e0fdf2
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-718270                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-vdqbd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-718270             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-718270    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-97kps                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-718270             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node newest-cni-718270 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node newest-cni-718270 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19s (x8 over 19s)  kubelet          Node newest-cni-718270 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-718270 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-718270 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s                 kubelet          Node newest-cni-718270 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-718270 event: Registered Node newest-cni-718270 in Controller
	
	
	==> dmesg <==
	[Oct14 20:48] overlayfs: idmapped layers are currently not supported
	[Oct14 20:49] overlayfs: idmapped layers are currently not supported
	[ +15.861847] overlayfs: idmapped layers are currently not supported
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	[Oct14 21:03] overlayfs: idmapped layers are currently not supported
	[Oct14 21:04] overlayfs: idmapped layers are currently not supported
	[Oct14 21:12] overlayfs: idmapped layers are currently not supported
	[Oct14 21:13] overlayfs: idmapped layers are currently not supported
	[ +32.288963] overlayfs: idmapped layers are currently not supported
	[Oct14 21:15] overlayfs: idmapped layers are currently not supported
	[Oct14 21:16] overlayfs: idmapped layers are currently not supported
	[ +21.775876] overlayfs: idmapped layers are currently not supported
	[Oct14 21:17] overlayfs: idmapped layers are currently not supported
	[Oct14 21:18] overlayfs: idmapped layers are currently not supported
	[Oct14 21:19] overlayfs: idmapped layers are currently not supported
	[ +43.590859] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [bd6ec79fb24a7fc2cd55e7b9119960ad4bd9706183a020e9cba4d41941d7f382] <==
	{"level":"warn","ts":"2025-10-14T21:19:50.283665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:50.309873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:50.366851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:50.404487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:50.453509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:50.519705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:50.560542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:50.604076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:50.642061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:50.688859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:50.761332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:50.787737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:50.826987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:50.867265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:50.906412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:50.952061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:50.986158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:51.029749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:51.085485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:51.185222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:51.208887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:51.278722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:51.313724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:51.350277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:51.516491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44250","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:20:04 up  4:02,  0 user,  load average: 4.83, 3.13, 2.24
	Linux newest-cni-718270 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6e36df57c1363721995ec1bf43f791ffd15b777247908e395b08e3bac10183df] <==
	I1014 21:20:01.677187       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:20:01.677929       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1014 21:20:01.678056       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:20:01.678068       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:20:01.678084       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:20:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:20:01.894487       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:20:01.894504       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:20:01.894513       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:20:01.894776       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [49dedd992488e39107def05d028693f4cdfedd0cc81adff772b973d27a3fd192] <==
	I1014 21:19:53.128781       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1014 21:19:53.146670       1 controller.go:667] quota admission added evaluator for: namespaces
	E1014 21:19:53.164594       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1014 21:19:53.223807       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:19:53.250845       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1014 21:19:53.261574       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:19:53.262076       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1014 21:19:53.368797       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 21:19:53.630021       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1014 21:19:53.638839       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1014 21:19:53.638862       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 21:19:54.455975       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 21:19:54.511787       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 21:19:54.650477       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1014 21:19:54.665364       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1014 21:19:54.666989       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 21:19:54.680749       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 21:19:55.144514       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 21:19:55.432259       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1014 21:19:55.458337       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 21:19:55.468395       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1014 21:20:00.420002       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1014 21:20:00.802323       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1014 21:20:01.202095       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:20:01.215712       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [a26d2f806ca12fdab6e8add068a938b6287781c84b471b9fce50c810522e75c5] <==
	I1014 21:20:00.343309       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1014 21:20:00.345867       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:20:00.346168       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1014 21:20:00.346205       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1014 21:20:00.351765       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1014 21:20:00.355008       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1014 21:20:00.361910       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1014 21:20:00.369059       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-718270" podCIDRs=["10.42.0.0/24"]
	I1014 21:20:00.373792       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1014 21:20:00.387610       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1014 21:20:00.388434       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1014 21:20:00.388835       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1014 21:20:00.389016       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1014 21:20:00.397670       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1014 21:20:00.397813       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1014 21:20:00.398409       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1014 21:20:00.399078       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1014 21:20:00.400152       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1014 21:20:00.400185       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1014 21:20:00.403942       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1014 21:20:00.404711       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:20:00.437325       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:20:00.437358       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1014 21:20:00.437366       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 21:20:00.498376       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c9ef4b8578a283140d2d13c6e3c0c0aaddc4609f8a08c1cb3c32e7d7df82a11c] <==
	I1014 21:20:01.630556       1 server_linux.go:53] "Using iptables proxy"
	I1014 21:20:01.715488       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 21:20:01.815747       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 21:20:01.815807       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1014 21:20:01.815957       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 21:20:01.843160       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:20:01.846958       1 server_linux.go:132] "Using iptables Proxier"
	I1014 21:20:01.852737       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 21:20:01.853129       1 server.go:527] "Version info" version="v1.34.1"
	I1014 21:20:01.853205       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:20:01.858826       1 config.go:106] "Starting endpoint slice config controller"
	I1014 21:20:01.858852       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 21:20:01.859168       1 config.go:200] "Starting service config controller"
	I1014 21:20:01.859182       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 21:20:01.864648       1 config.go:309] "Starting node config controller"
	I1014 21:20:01.864667       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 21:20:01.864675       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 21:20:01.865133       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 21:20:01.865142       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 21:20:01.959261       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 21:20:01.959336       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1014 21:20:01.966619       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [608b69614a16d8a9500fcbd309790e576f00238c081007a25bd309fd0fbbaadf] <==
	I1014 21:19:53.145538       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 21:19:53.150497       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1014 21:19:53.177800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1014 21:19:53.180133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1014 21:19:53.199990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1014 21:19:53.200504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1014 21:19:53.200641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1014 21:19:53.200747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1014 21:19:53.200838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1014 21:19:53.201016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1014 21:19:53.201116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1014 21:19:53.201205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1014 21:19:53.201301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1014 21:19:53.201394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1014 21:19:53.201487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1014 21:19:53.201569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1014 21:19:53.201658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1014 21:19:53.204203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1014 21:19:53.204351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1014 21:19:53.204621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1014 21:19:53.204728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1014 21:19:54.065040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1014 21:19:54.078687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1014 21:19:54.261664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1014 21:19:57.144491       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 14 21:19:56 newest-cni-718270 kubelet[1321]: I1014 21:19:56.515707    1321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-718270"
	Oct 14 21:19:56 newest-cni-718270 kubelet[1321]: E1014 21:19:56.529016    1321 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-718270\" already exists" pod="kube-system/kube-apiserver-newest-cni-718270"
	Oct 14 21:19:56 newest-cni-718270 kubelet[1321]: I1014 21:19:56.529231    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-718270" podStartSLOduration=1.529215605 podStartE2EDuration="1.529215605s" podCreationTimestamp="2025-10-14 21:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:19:56.5043477 +0000 UTC m=+1.262594405" watchObservedRunningTime="2025-10-14 21:19:56.529215605 +0000 UTC m=+1.287462294"
	Oct 14 21:19:56 newest-cni-718270 kubelet[1321]: I1014 21:19:56.559615    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-718270" podStartSLOduration=1.55959537 podStartE2EDuration="1.55959537s" podCreationTimestamp="2025-10-14 21:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:19:56.530435281 +0000 UTC m=+1.288681987" watchObservedRunningTime="2025-10-14 21:19:56.55959537 +0000 UTC m=+1.317842067"
	Oct 14 21:19:56 newest-cni-718270 kubelet[1321]: I1014 21:19:56.576032    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-718270" podStartSLOduration=1.576012392 podStartE2EDuration="1.576012392s" podCreationTimestamp="2025-10-14 21:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:19:56.560113428 +0000 UTC m=+1.318360142" watchObservedRunningTime="2025-10-14 21:19:56.576012392 +0000 UTC m=+1.334259081"
	Oct 14 21:19:56 newest-cni-718270 kubelet[1321]: I1014 21:19:56.598738    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-718270" podStartSLOduration=1.59870632 podStartE2EDuration="1.59870632s" podCreationTimestamp="2025-10-14 21:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:19:56.576823987 +0000 UTC m=+1.335070684" watchObservedRunningTime="2025-10-14 21:19:56.59870632 +0000 UTC m=+1.356953017"
	Oct 14 21:20:00 newest-cni-718270 kubelet[1321]: I1014 21:20:00.425385    1321 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 14 21:20:00 newest-cni-718270 kubelet[1321]: I1014 21:20:00.430342    1321 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 14 21:20:00 newest-cni-718270 kubelet[1321]: I1014 21:20:00.546330    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e5738b0-840c-4264-b49e-f9e41a70647e-xtables-lock\") pod \"kindnet-vdqbd\" (UID: \"5e5738b0-840c-4264-b49e-f9e41a70647e\") " pod="kube-system/kindnet-vdqbd"
	Oct 14 21:20:00 newest-cni-718270 kubelet[1321]: I1014 21:20:00.546778    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l849s\" (UniqueName: \"kubernetes.io/projected/5e5738b0-840c-4264-b49e-f9e41a70647e-kube-api-access-l849s\") pod \"kindnet-vdqbd\" (UID: \"5e5738b0-840c-4264-b49e-f9e41a70647e\") " pod="kube-system/kindnet-vdqbd"
	Oct 14 21:20:00 newest-cni-718270 kubelet[1321]: I1014 21:20:00.546973    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp968\" (UniqueName: \"kubernetes.io/projected/c9808191-fd7e-4064-a7a4-f8d026004ef8-kube-api-access-xp968\") pod \"kube-proxy-97kps\" (UID: \"c9808191-fd7e-4064-a7a4-f8d026004ef8\") " pod="kube-system/kube-proxy-97kps"
	Oct 14 21:20:00 newest-cni-718270 kubelet[1321]: I1014 21:20:00.547115    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5e5738b0-840c-4264-b49e-f9e41a70647e-cni-cfg\") pod \"kindnet-vdqbd\" (UID: \"5e5738b0-840c-4264-b49e-f9e41a70647e\") " pod="kube-system/kindnet-vdqbd"
	Oct 14 21:20:00 newest-cni-718270 kubelet[1321]: I1014 21:20:00.547267    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c9808191-fd7e-4064-a7a4-f8d026004ef8-kube-proxy\") pod \"kube-proxy-97kps\" (UID: \"c9808191-fd7e-4064-a7a4-f8d026004ef8\") " pod="kube-system/kube-proxy-97kps"
	Oct 14 21:20:00 newest-cni-718270 kubelet[1321]: I1014 21:20:00.547400    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9808191-fd7e-4064-a7a4-f8d026004ef8-lib-modules\") pod \"kube-proxy-97kps\" (UID: \"c9808191-fd7e-4064-a7a4-f8d026004ef8\") " pod="kube-system/kube-proxy-97kps"
	Oct 14 21:20:00 newest-cni-718270 kubelet[1321]: I1014 21:20:00.547540    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e5738b0-840c-4264-b49e-f9e41a70647e-lib-modules\") pod \"kindnet-vdqbd\" (UID: \"5e5738b0-840c-4264-b49e-f9e41a70647e\") " pod="kube-system/kindnet-vdqbd"
	Oct 14 21:20:00 newest-cni-718270 kubelet[1321]: I1014 21:20:00.547701    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9808191-fd7e-4064-a7a4-f8d026004ef8-xtables-lock\") pod \"kube-proxy-97kps\" (UID: \"c9808191-fd7e-4064-a7a4-f8d026004ef8\") " pod="kube-system/kube-proxy-97kps"
	Oct 14 21:20:00 newest-cni-718270 kubelet[1321]: E1014 21:20:00.678573    1321 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 14 21:20:00 newest-cni-718270 kubelet[1321]: E1014 21:20:00.678615    1321 projected.go:196] Error preparing data for projected volume kube-api-access-xp968 for pod kube-system/kube-proxy-97kps: configmap "kube-root-ca.crt" not found
	Oct 14 21:20:00 newest-cni-718270 kubelet[1321]: E1014 21:20:00.678746    1321 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c9808191-fd7e-4064-a7a4-f8d026004ef8-kube-api-access-xp968 podName:c9808191-fd7e-4064-a7a4-f8d026004ef8 nodeName:}" failed. No retries permitted until 2025-10-14 21:20:01.178690232 +0000 UTC m=+5.936936920 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xp968" (UniqueName: "kubernetes.io/projected/c9808191-fd7e-4064-a7a4-f8d026004ef8-kube-api-access-xp968") pod "kube-proxy-97kps" (UID: "c9808191-fd7e-4064-a7a4-f8d026004ef8") : configmap "kube-root-ca.crt" not found
	Oct 14 21:20:00 newest-cni-718270 kubelet[1321]: E1014 21:20:00.684460    1321 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 14 21:20:00 newest-cni-718270 kubelet[1321]: E1014 21:20:00.684518    1321 projected.go:196] Error preparing data for projected volume kube-api-access-l849s for pod kube-system/kindnet-vdqbd: configmap "kube-root-ca.crt" not found
	Oct 14 21:20:00 newest-cni-718270 kubelet[1321]: E1014 21:20:00.684638    1321 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e5738b0-840c-4264-b49e-f9e41a70647e-kube-api-access-l849s podName:5e5738b0-840c-4264-b49e-f9e41a70647e nodeName:}" failed. No retries permitted until 2025-10-14 21:20:01.184613202 +0000 UTC m=+5.942859899 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l849s" (UniqueName: "kubernetes.io/projected/5e5738b0-840c-4264-b49e-f9e41a70647e-kube-api-access-l849s") pod "kindnet-vdqbd" (UID: "5e5738b0-840c-4264-b49e-f9e41a70647e") : configmap "kube-root-ca.crt" not found
	Oct 14 21:20:01 newest-cni-718270 kubelet[1321]: I1014 21:20:01.255051    1321 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 14 21:20:02 newest-cni-718270 kubelet[1321]: I1014 21:20:02.596717    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-97kps" podStartSLOduration=2.596684513 podStartE2EDuration="2.596684513s" podCreationTimestamp="2025-10-14 21:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:20:02.582461769 +0000 UTC m=+7.340708458" watchObservedRunningTime="2025-10-14 21:20:02.596684513 +0000 UTC m=+7.354931202"
	Oct 14 21:20:04 newest-cni-718270 kubelet[1321]: I1014 21:20:04.459645    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vdqbd" podStartSLOduration=4.459622046 podStartE2EDuration="4.459622046s" podCreationTimestamp="2025-10-14 21:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:20:02.599042757 +0000 UTC m=+7.357289470" watchObservedRunningTime="2025-10-14 21:20:04.459622046 +0000 UTC m=+9.217868743"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-718270 -n newest-cni-718270
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-718270 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-bdhlq storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-718270 describe pod coredns-66bc5c9577-bdhlq storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-718270 describe pod coredns-66bc5c9577-bdhlq storage-provisioner: exit status 1 (87.099155ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-bdhlq" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-718270 describe pod coredns-66bc5c9577-bdhlq storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.71s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-970287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-970287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (334.0506ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:20:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-970287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-970287 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-970287 describe deploy/metrics-server -n kube-system: exit status 1 (128.920003ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-970287 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-970287
helpers_test.go:243: (dbg) docker inspect no-preload-970287:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497",
	        "Created": "2025-10-14T21:18:55.141228098Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 746733,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:18:55.230100139Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/hostname",
	        "HostsPath": "/var/lib/docker/containers/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/hosts",
	        "LogPath": "/var/lib/docker/containers/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497-json.log",
	        "Name": "/no-preload-970287",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-970287:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-970287",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497",
	                "LowerDir": "/var/lib/docker/overlay2/9ca0b0ff57c15a632ab5edc016a9da93666239e68b4aebbd8ee7403dacc3a61a-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9ca0b0ff57c15a632ab5edc016a9da93666239e68b4aebbd8ee7403dacc3a61a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9ca0b0ff57c15a632ab5edc016a9da93666239e68b4aebbd8ee7403dacc3a61a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9ca0b0ff57c15a632ab5edc016a9da93666239e68b4aebbd8ee7403dacc3a61a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-970287",
	                "Source": "/var/lib/docker/volumes/no-preload-970287/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-970287",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-970287",
	                "name.minikube.sigs.k8s.io": "no-preload-970287",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0b52b1358d5dc39085ddfab2ff8e10c01a377943a5f55cf9faf46a4dd1a1fc7c",
	            "SandboxKey": "/var/run/docker/netns/0b52b1358d5d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33812"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33813"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33816"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33814"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33815"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-970287": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:b9:c1:17:ac:61",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "006eb97cac5b79644e6bd32a155d4cd1ef5c55fddbcd2356a8ffe605bdf15f5f",
	                    "EndpointID": "12deea865e5a630d6580b6b6626eac74b26ddf21b963688896e67fcbae82eb8e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-970287",
	                        "9ce7c6b3a1e2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970287 -n no-preload-970287
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-970287 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-970287 logs -n 25: (1.872257313s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:16 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-319221 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-319221 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-319221 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ start   │ -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-650951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	│ stop    │ -p embed-certs-650951 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-650951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ start   │ -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:18 UTC │
	│ image   │ default-k8s-diff-port-319221 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ pause   │ -p default-k8s-diff-port-319221 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-319221                                                                                                                                                                                                               │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ delete  │ -p default-k8s-diff-port-319221                                                                                                                                                                                                               │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ delete  │ -p disable-driver-mounts-457849                                                                                                                                                                                                               │ disable-driver-mounts-457849 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ start   │ -p no-preload-970287 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:20 UTC │
	│ image   │ embed-certs-650951 image list --format=json                                                                                                                                                                                                   │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ pause   │ -p embed-certs-650951 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │                     │
	│ delete  │ -p embed-certs-650951                                                                                                                                                                                                                         │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ delete  │ -p embed-certs-650951                                                                                                                                                                                                                         │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ start   │ -p newest-cni-718270 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-718270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	│ stop    │ -p newest-cni-718270 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ addons  │ enable dashboard -p newest-cni-718270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ start   │ -p newest-cni-718270 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-970287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:20:07
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:20:07.134354  754019 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:20:07.134685  754019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:20:07.134703  754019 out.go:374] Setting ErrFile to fd 2...
	I1014 21:20:07.134713  754019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:20:07.135093  754019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:20:07.135610  754019 out.go:368] Setting JSON to false
	I1014 21:20:07.136773  754019 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14558,"bootTime":1760462250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:20:07.136855  754019 start.go:141] virtualization:  
	I1014 21:20:07.140373  754019 out.go:179] * [newest-cni-718270] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:20:07.144963  754019 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:20:07.145035  754019 notify.go:220] Checking for updates...
	I1014 21:20:07.156463  754019 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:20:07.162126  754019 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:20:07.166245  754019 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:20:07.169573  754019 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:20:07.172540  754019 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:20:07.176080  754019 config.go:182] Loaded profile config "newest-cni-718270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:07.176799  754019 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:20:07.207727  754019 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:20:07.207914  754019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:20:07.272227  754019 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:20:07.262374652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:20:07.272432  754019 docker.go:318] overlay module found
	I1014 21:20:07.275576  754019 out.go:179] * Using the docker driver based on existing profile
	I1014 21:20:07.278402  754019 start.go:305] selected driver: docker
	I1014 21:20:07.278421  754019 start.go:925] validating driver "docker" against &{Name:newest-cni-718270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:20:07.278544  754019 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:20:07.279261  754019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:20:07.335404  754019 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:20:07.326146898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:20:07.335753  754019 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1014 21:20:07.335797  754019 cni.go:84] Creating CNI manager for ""
	I1014 21:20:07.335873  754019 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:20:07.335916  754019 start.go:349] cluster config:
	{Name:newest-cni-718270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:20:07.339032  754019 out.go:179] * Starting "newest-cni-718270" primary control-plane node in "newest-cni-718270" cluster
	I1014 21:20:07.341800  754019 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:20:07.344678  754019 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:20:07.347442  754019 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:20:07.347502  754019 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 21:20:07.347516  754019 cache.go:58] Caching tarball of preloaded images
	I1014 21:20:07.347514  754019 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:20:07.347606  754019 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:20:07.347617  754019 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 21:20:07.347741  754019 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/config.json ...
	I1014 21:20:07.366876  754019 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:20:07.366900  754019 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:20:07.366922  754019 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:20:07.366947  754019 start.go:360] acquireMachinesLock for newest-cni-718270: {Name:mk4b94f993721098e6e50976108704497d217d5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:20:07.367004  754019 start.go:364] duration metric: took 34.446µs to acquireMachinesLock for "newest-cni-718270"
	I1014 21:20:07.367027  754019 start.go:96] Skipping create...Using existing machine configuration
	I1014 21:20:07.367044  754019 fix.go:54] fixHost starting: 
	I1014 21:20:07.367295  754019 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:07.403346  754019 fix.go:112] recreateIfNeeded on newest-cni-718270: state=Stopped err=<nil>
	W1014 21:20:07.403386  754019 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 21:20:07.406806  754019 out.go:252] * Restarting existing docker container for "newest-cni-718270" ...
	I1014 21:20:07.406910  754019 cli_runner.go:164] Run: docker start newest-cni-718270
	I1014 21:20:07.710141  754019 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:07.732519  754019 kic.go:430] container "newest-cni-718270" state is running.
	I1014 21:20:07.732946  754019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718270
	I1014 21:20:07.757790  754019 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/config.json ...
	I1014 21:20:07.758026  754019 machine.go:93] provisionDockerMachine start ...
	I1014 21:20:07.758090  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:07.780260  754019 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:07.780635  754019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1014 21:20:07.780652  754019 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:20:07.782615  754019 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1014 21:20:10.928440  754019 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-718270
	
	I1014 21:20:10.928514  754019 ubuntu.go:182] provisioning hostname "newest-cni-718270"
	I1014 21:20:10.928598  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:10.946074  754019 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:10.946408  754019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1014 21:20:10.946425  754019 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-718270 && echo "newest-cni-718270" | sudo tee /etc/hostname
	I1014 21:20:11.111671  754019 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-718270
	
	I1014 21:20:11.111755  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:11.132960  754019 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:11.133268  754019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1014 21:20:11.133292  754019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-718270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-718270/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-718270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:20:11.276674  754019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 21:20:11.276703  754019 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:20:11.276733  754019 ubuntu.go:190] setting up certificates
	I1014 21:20:11.276743  754019 provision.go:84] configureAuth start
	I1014 21:20:11.276806  754019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718270
	I1014 21:20:11.294451  754019 provision.go:143] copyHostCerts
	I1014 21:20:11.294580  754019 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:20:11.294605  754019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:20:11.294689  754019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:20:11.294795  754019 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:20:11.294807  754019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:20:11.294834  754019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:20:11.294892  754019 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:20:11.294901  754019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:20:11.294924  754019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:20:11.294973  754019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.newest-cni-718270 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-718270]
	I1014 21:20:12.196847  754019 provision.go:177] copyRemoteCerts
	I1014 21:20:12.196918  754019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:20:12.196958  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:12.216799  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:12.322441  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 21:20:12.340555  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:20:12.358874  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 21:20:12.376881  754019 provision.go:87] duration metric: took 1.100112396s to configureAuth
	I1014 21:20:12.376904  754019 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:20:12.377093  754019 config.go:182] Loaded profile config "newest-cni-718270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:12.377212  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:12.394315  754019 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:12.394623  754019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1014 21:20:12.394643  754019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:20:12.701850  754019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:20:12.701872  754019 machine.go:96] duration metric: took 4.943837245s to provisionDockerMachine
	I1014 21:20:12.701883  754019 start.go:293] postStartSetup for "newest-cni-718270" (driver="docker")
	I1014 21:20:12.701893  754019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:20:12.701954  754019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:20:12.702008  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:12.720190  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:12.824501  754019 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:20:12.828073  754019 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:20:12.828116  754019 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:20:12.828128  754019 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:20:12.828183  754019 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:20:12.828269  754019 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:20:12.828398  754019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:20:12.836051  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:20:12.855105  754019 start.go:296] duration metric: took 153.205972ms for postStartSetup
	I1014 21:20:12.855219  754019 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:20:12.855273  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:12.874608  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:12.979148  754019 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:20:12.983745  754019 fix.go:56] duration metric: took 5.616700881s for fixHost
	I1014 21:20:12.983772  754019 start.go:83] releasing machines lock for "newest-cni-718270", held for 5.616755946s
	I1014 21:20:12.983843  754019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718270
	I1014 21:20:13.000549  754019 ssh_runner.go:195] Run: cat /version.json
	I1014 21:20:13.000620  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:13.000872  754019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:20:13.000930  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:13.020193  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:13.028428  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:13.120200  754019 ssh_runner.go:195] Run: systemctl --version
	I1014 21:20:13.210756  754019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:20:13.247270  754019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:20:13.251931  754019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:20:13.252004  754019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:20:13.260216  754019 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 21:20:13.260291  754019 start.go:495] detecting cgroup driver to use...
	I1014 21:20:13.260416  754019 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:20:13.260469  754019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:20:13.275982  754019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:20:13.288957  754019 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:20:13.289022  754019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:20:13.305079  754019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:20:13.319186  754019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:20:13.441089  754019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:20:13.579697  754019 docker.go:234] disabling docker service ...
	I1014 21:20:13.579808  754019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:20:13.596746  754019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:20:13.611616  754019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:20:13.737848  754019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:20:13.856259  754019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:20:13.871464  754019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:20:13.887612  754019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:20:13.887681  754019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.898094  754019 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:20:13.898174  754019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.908629  754019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.918430  754019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.928660  754019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:20:13.937754  754019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.947219  754019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.956774  754019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.967169  754019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:20:13.975143  754019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:20:13.982785  754019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:14.116745  754019 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 21:20:14.263711  754019 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:20:14.263858  754019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:20:14.268618  754019 start.go:563] Will wait 60s for crictl version
	I1014 21:20:14.268687  754019 ssh_runner.go:195] Run: which crictl
	I1014 21:20:14.272565  754019 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:20:14.298294  754019 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:20:14.298466  754019 ssh_runner.go:195] Run: crio --version
	I1014 21:20:14.330916  754019 ssh_runner.go:195] Run: crio --version
	I1014 21:20:14.366083  754019 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 21:20:14.368727  754019 cli_runner.go:164] Run: docker network inspect newest-cni-718270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:20:14.390570  754019 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1014 21:20:14.394557  754019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:20:14.406994  754019 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1014 21:20:14.409789  754019 kubeadm.go:883] updating cluster {Name:newest-cni-718270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:20:14.409934  754019 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:20:14.410012  754019 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:20:14.445834  754019 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:20:14.445860  754019 crio.go:433] Images already preloaded, skipping extraction
	I1014 21:20:14.445916  754019 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:20:14.471310  754019 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:20:14.471336  754019 cache_images.go:85] Images are preloaded, skipping loading
	I1014 21:20:14.471344  754019 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1014 21:20:14.471438  754019 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-718270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 21:20:14.471523  754019 ssh_runner.go:195] Run: crio config
	I1014 21:20:14.526158  754019 cni.go:84] Creating CNI manager for ""
	I1014 21:20:14.526190  754019 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:20:14.526212  754019 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1014 21:20:14.526235  754019 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-718270 NodeName:newest-cni-718270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:20:14.526466  754019 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-718270"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 21:20:14.526556  754019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:20:14.533974  754019 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:20:14.534061  754019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:20:14.541524  754019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1014 21:20:14.554814  754019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:20:14.567474  754019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1014 21:20:14.581058  754019 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:20:14.584758  754019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:20:14.594932  754019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:14.716570  754019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:20:14.734870  754019 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270 for IP: 192.168.85.2
	I1014 21:20:14.734892  754019 certs.go:195] generating shared ca certs ...
	I1014 21:20:14.734909  754019 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:14.735067  754019 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:20:14.735116  754019 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:20:14.735129  754019 certs.go:257] generating profile certs ...
	I1014 21:20:14.735217  754019 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/client.key
	I1014 21:20:14.735293  754019 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.key.df573141
	I1014 21:20:14.735335  754019 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/proxy-client.key
	I1014 21:20:14.735462  754019 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:20:14.735498  754019 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:20:14.735509  754019 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:20:14.735533  754019 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:20:14.735558  754019 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:20:14.735586  754019 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:20:14.735637  754019 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:20:14.736438  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:20:14.758409  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:20:14.776281  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:20:14.793679  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:20:14.811490  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 21:20:14.837714  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 21:20:14.864140  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:20:14.889890  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 21:20:14.920159  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:20:14.947308  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:20:14.970838  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:20:14.989166  754019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:20:15.004057  754019 ssh_runner.go:195] Run: openssl version
	I1014 21:20:15.015587  754019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:20:15.027423  754019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:15.034441  754019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:15.034578  754019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:15.078626  754019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 21:20:15.088718  754019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:20:15.098290  754019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:20:15.102477  754019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:20:15.102592  754019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:20:15.144904  754019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:20:15.153585  754019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:20:15.162311  754019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:20:15.166491  754019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:20:15.166581  754019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:20:15.216090  754019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 21:20:15.224652  754019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:20:15.230727  754019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 21:20:15.280805  754019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 21:20:15.338649  754019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 21:20:15.409894  754019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 21:20:15.484821  754019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 21:20:15.556404  754019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 21:20:15.677557  754019 kubeadm.go:400] StartCluster: {Name:newest-cni-718270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:20:15.677653  754019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:20:15.677725  754019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:20:15.774849  754019 cri.go:89] found id: "246522f17441699f43d5b755d1b3968e04447f890b4feafb0eb44e40ec79cb4f"
	I1014 21:20:15.774872  754019 cri.go:89] found id: "6e970e1d72789f798808d196b5c8410d1717b9fb6b756584e8ef0b370e39b72c"
	I1014 21:20:15.774877  754019 cri.go:89] found id: "d1ce5d2ce7305a0112063ea79c98bdb64bb308e7ed50e0f0ef42ef9c62ad6b76"
	I1014 21:20:15.774881  754019 cri.go:89] found id: "f9f8c05de06667afe6beb9fb43f7e7bb59e3bb9721fbd64688bcf0e60d2e0130"
	I1014 21:20:15.774884  754019 cri.go:89] found id: ""
	I1014 21:20:15.774940  754019 ssh_runner.go:195] Run: sudo runc list -f json
	W1014 21:20:15.797126  754019 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:20:15Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:20:15.797207  754019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:20:15.827155  754019 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 21:20:15.827177  754019 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 21:20:15.827227  754019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 21:20:15.843285  754019 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 21:20:15.843857  754019 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-718270" does not appear in /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:20:15.844090  754019 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-545047/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-718270" cluster setting kubeconfig missing "newest-cni-718270" context setting]
	I1014 21:20:15.844829  754019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
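The "needs updating (will repair)" message means the shared kubeconfig had neither a cluster nor a context entry for newest-cni-718270, so minikube rewrites the file under a lock. A minimal client-go sketch of adding the missing entries (the server address, entry names and kubeconfig path come from this log; the CA path and the assumption that a matching user entry already exists are illustrative, not minikube's implementation):

package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/21409-545047/kubeconfig"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Add the missing cluster and context entries described in the log.
	cfg.Clusters["newest-cni-718270"] = &api.Cluster{
		Server: "https://192.168.85.2:8443",
		// CA path assumed for illustration only.
		CertificateAuthority: "/home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt",
	}
	cfg.Contexts["newest-cni-718270"] = &api.Context{
		Cluster:  "newest-cni-718270",
		AuthInfo: "newest-cni-718270", // assumes the user entry is already present
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}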
	I1014 21:20:15.848965  754019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 21:20:15.885698  754019 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1014 21:20:15.885729  754019 kubeadm.go:601] duration metric: took 58.544655ms to restartPrimaryControlPlane
	I1014 21:20:15.885737  754019 kubeadm.go:402] duration metric: took 208.198673ms to StartCluster
	I1014 21:20:15.885751  754019 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:15.885816  754019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:20:15.886774  754019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:15.886985  754019 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:20:15.887395  754019 config.go:182] Loaded profile config "newest-cni-718270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:15.887445  754019 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 21:20:15.887511  754019 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-718270"
	I1014 21:20:15.887525  754019 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-718270"
	W1014 21:20:15.887531  754019 addons.go:247] addon storage-provisioner should already be in state true
	I1014 21:20:15.887551  754019 host.go:66] Checking if "newest-cni-718270" exists ...
	I1014 21:20:15.888069  754019 addons.go:69] Setting dashboard=true in profile "newest-cni-718270"
	I1014 21:20:15.888087  754019 addons.go:238] Setting addon dashboard=true in "newest-cni-718270"
	W1014 21:20:15.888093  754019 addons.go:247] addon dashboard should already be in state true
	I1014 21:20:15.888122  754019 host.go:66] Checking if "newest-cni-718270" exists ...
	I1014 21:20:15.891768  754019 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:15.892157  754019 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:15.899140  754019 addons.go:69] Setting default-storageclass=true in profile "newest-cni-718270"
	I1014 21:20:15.899179  754019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-718270"
	I1014 21:20:15.899515  754019 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:15.900224  754019 out.go:179] * Verifying Kubernetes components...
	I1014 21:20:15.905445  754019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:16.011967  754019 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:20:16.012056  754019 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1014 21:20:16.015069  754019 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:20:16.015094  754019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 21:20:16.015157  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:16.019125  754019 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
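To copy the addon manifests onto the node, minikube first resolves which host port Docker published for the container's 22/tcp, using the inspect template shown above. A short sketch of the same lookup via os/exec (container name from this log; the helper name is invented for illustration):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker which host port is mapped to the container's 22/tcp,
// using the same Go template that appears in the log above (no shell quoting
// is needed because os/exec passes the template as a single argument).
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("newest-cni-718270")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("SSH is published on host port", port)
}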
	
	
	==> CRI-O <==
	Oct 14 21:20:04 no-preload-970287 crio[838]: time="2025-10-14T21:20:04.170886111Z" level=info msg="Created container ee27b1cde144762240d9d90efa93043d2adbf190d491c1e966fdb7c0bc53a80b: kube-system/coredns-66bc5c9577-m627n/coredns" id=5d4ce998-9436-46a1-b08c-fcdfe111e04d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:20:04 no-preload-970287 crio[838]: time="2025-10-14T21:20:04.1732211Z" level=info msg="Starting container: ee27b1cde144762240d9d90efa93043d2adbf190d491c1e966fdb7c0bc53a80b" id=4c638ad0-a41a-43b9-887c-7a4923d01473 name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:20:04 no-preload-970287 crio[838]: time="2025-10-14T21:20:04.183184244Z" level=info msg="Started container" PID=2473 containerID=ee27b1cde144762240d9d90efa93043d2adbf190d491c1e966fdb7c0bc53a80b description=kube-system/coredns-66bc5c9577-m627n/coredns id=4c638ad0-a41a-43b9-887c-7a4923d01473 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0e571659ad20619021527335eba4a8a2ca13b02380467a13be363a2a89f78bd7
	Oct 14 21:20:07 no-preload-970287 crio[838]: time="2025-10-14T21:20:07.469616857Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a95b531a-dd3a-4014-8604-02590e890cba name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:20:07 no-preload-970287 crio[838]: time="2025-10-14T21:20:07.46969571Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:07 no-preload-970287 crio[838]: time="2025-10-14T21:20:07.487167645Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:aaf25210a48f060137a434f90a1ec49b887bf607397e11a8ddb6e81eae4b1470 UID:9b46c18b-db2a-4ed6-9a6a-03b201869a09 NetNS:/var/run/netns/9896dbd8-1a5d-49ff-a6dc-ae7787d4dc50 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000792e8}] Aliases:map[]}"
	Oct 14 21:20:07 no-preload-970287 crio[838]: time="2025-10-14T21:20:07.48735699Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 14 21:20:07 no-preload-970287 crio[838]: time="2025-10-14T21:20:07.504929431Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:aaf25210a48f060137a434f90a1ec49b887bf607397e11a8ddb6e81eae4b1470 UID:9b46c18b-db2a-4ed6-9a6a-03b201869a09 NetNS:/var/run/netns/9896dbd8-1a5d-49ff-a6dc-ae7787d4dc50 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000792e8}] Aliases:map[]}"
	Oct 14 21:20:07 no-preload-970287 crio[838]: time="2025-10-14T21:20:07.50525208Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 14 21:20:07 no-preload-970287 crio[838]: time="2025-10-14T21:20:07.51369456Z" level=info msg="Ran pod sandbox aaf25210a48f060137a434f90a1ec49b887bf607397e11a8ddb6e81eae4b1470 with infra container: default/busybox/POD" id=a95b531a-dd3a-4014-8604-02590e890cba name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:20:07 no-preload-970287 crio[838]: time="2025-10-14T21:20:07.514813098Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=afdbf85c-e46f-405b-b0c7-a89d1aa004e4 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:20:07 no-preload-970287 crio[838]: time="2025-10-14T21:20:07.515036339Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=afdbf85c-e46f-405b-b0c7-a89d1aa004e4 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:20:07 no-preload-970287 crio[838]: time="2025-10-14T21:20:07.515145806Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=afdbf85c-e46f-405b-b0c7-a89d1aa004e4 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:20:07 no-preload-970287 crio[838]: time="2025-10-14T21:20:07.518044902Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5f981100-ccf2-474a-ba27-aaabea0dba69 name=/runtime.v1.ImageService/PullImage
	Oct 14 21:20:07 no-preload-970287 crio[838]: time="2025-10-14T21:20:07.523089675Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 14 21:20:09 no-preload-970287 crio[838]: time="2025-10-14T21:20:09.516219695Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=5f981100-ccf2-474a-ba27-aaabea0dba69 name=/runtime.v1.ImageService/PullImage
	Oct 14 21:20:09 no-preload-970287 crio[838]: time="2025-10-14T21:20:09.516942007Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5a2d5fc7-151e-4e76-9d3e-d6381b9244e8 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:20:09 no-preload-970287 crio[838]: time="2025-10-14T21:20:09.518626178Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6427deda-1d93-4ff9-95da-8812f0d1236e name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:20:09 no-preload-970287 crio[838]: time="2025-10-14T21:20:09.525705561Z" level=info msg="Creating container: default/busybox/busybox" id=6e614768-c8ff-4537-bff3-9c2297bbbc16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:20:09 no-preload-970287 crio[838]: time="2025-10-14T21:20:09.52652856Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:09 no-preload-970287 crio[838]: time="2025-10-14T21:20:09.537351758Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:09 no-preload-970287 crio[838]: time="2025-10-14T21:20:09.538027382Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:09 no-preload-970287 crio[838]: time="2025-10-14T21:20:09.554721282Z" level=info msg="Created container e920f9146c8f7fea9e588bcf8056f8b74611007b662c84e9cbaa10da190fe1e2: default/busybox/busybox" id=6e614768-c8ff-4537-bff3-9c2297bbbc16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:20:09 no-preload-970287 crio[838]: time="2025-10-14T21:20:09.555752687Z" level=info msg="Starting container: e920f9146c8f7fea9e588bcf8056f8b74611007b662c84e9cbaa10da190fe1e2" id=563c2025-6a7c-42ce-815f-9af1d0be23c2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:20:09 no-preload-970287 crio[838]: time="2025-10-14T21:20:09.557548261Z" level=info msg="Started container" PID=2532 containerID=e920f9146c8f7fea9e588bcf8056f8b74611007b662c84e9cbaa10da190fe1e2 description=default/busybox/busybox id=563c2025-6a7c-42ce-815f-9af1d0be23c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aaf25210a48f060137a434f90a1ec49b887bf607397e11a8ddb6e81eae4b1470
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e920f9146c8f7       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   aaf25210a48f0       busybox                                     default
	ee27b1cde1447       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago      Running             coredns                   0                   0e571659ad206       coredns-66bc5c9577-m627n                    kube-system
	c5a6bc25bc24f       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      13 seconds ago      Running             storage-provisioner       0                   28b12b5f382a8       storage-provisioner                         kube-system
	15013baf20cf5       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   99428304b8e52       kindnet-tps4w                               kube-system
	b9de27b5f727f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      28 seconds ago      Running             kube-proxy                0                   3b57741505a29       kube-proxy-c4vlq                            kube-system
	b6827cf87037a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      46 seconds ago      Running             kube-controller-manager   0                   a63efa3bbe3a4       kube-controller-manager-no-preload-970287   kube-system
	5e06cf9584e2a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      46 seconds ago      Running             kube-scheduler            0                   d9c6ae680d516       kube-scheduler-no-preload-970287            kube-system
	c8c6956ebb3e0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      46 seconds ago      Running             etcd                      0                   8e9a2ac2ba89e       etcd-no-preload-970287                      kube-system
	40fc742be94b8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      46 seconds ago      Running             kube-apiserver            0                   877557de6488a       kube-apiserver-no-preload-970287            kube-system
	
	
	==> coredns [ee27b1cde144762240d9d90efa93043d2adbf190d491c1e966fdb7c0bc53a80b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53856 - 51805 "HINFO IN 2846331050983365500.2516683197644504410. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024179486s
	
	
	==> describe nodes <==
	Name:               no-preload-970287
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-970287
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=no-preload-970287
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T21_19_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 21:19:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-970287
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 21:20:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 21:20:12 +0000   Tue, 14 Oct 2025 21:19:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 21:20:12 +0000   Tue, 14 Oct 2025 21:19:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 21:20:12 +0000   Tue, 14 Oct 2025 21:19:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 21:20:12 +0000   Tue, 14 Oct 2025 21:20:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-970287
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 5bff15af05a44629bf4ef9658190571b
	  System UUID:                739e9d19-3ca7-40c2-b50a-6b77f8bfdd3e
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-m627n                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-970287                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-tps4w                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-no-preload-970287             250m (12%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-no-preload-970287    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-c4vlq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-no-preload-970287             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node no-preload-970287 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node no-preload-970287 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node no-preload-970287 status is now: NodeHasSufficientPID
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-970287 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-970287 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-970287 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           32s                node-controller  Node no-preload-970287 event: Registered Node no-preload-970287 in Controller
	  Normal   NodeReady                14s                kubelet          Node no-preload-970287 status is now: NodeReady
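The block above is effectively `kubectl describe node no-preload-970287`: the Ready condition flipped to True at 21:20:03 and the listed pods carry the standard control-plane requests. A minimal client-go sketch that reads the same Ready condition programmatically, assuming the kubeconfig used elsewhere in this report has its current context pointed at this cluster (illustrative only):

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the current context in this kubeconfig targets no-preload-970287.
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/21409-545047/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-970287", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Walk the conditions list and report Ready, matching the Conditions
	// table in the describe output above.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s (%s)\n", c.Status, c.Reason)
		}
	}
}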
	
	
	==> dmesg <==
	[Oct14 20:49] overlayfs: idmapped layers are currently not supported
	[ +15.861847] overlayfs: idmapped layers are currently not supported
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	[Oct14 21:03] overlayfs: idmapped layers are currently not supported
	[Oct14 21:04] overlayfs: idmapped layers are currently not supported
	[Oct14 21:12] overlayfs: idmapped layers are currently not supported
	[Oct14 21:13] overlayfs: idmapped layers are currently not supported
	[ +32.288963] overlayfs: idmapped layers are currently not supported
	[Oct14 21:15] overlayfs: idmapped layers are currently not supported
	[Oct14 21:16] overlayfs: idmapped layers are currently not supported
	[ +21.775876] overlayfs: idmapped layers are currently not supported
	[Oct14 21:17] overlayfs: idmapped layers are currently not supported
	[Oct14 21:18] overlayfs: idmapped layers are currently not supported
	[Oct14 21:19] overlayfs: idmapped layers are currently not supported
	[ +43.590859] overlayfs: idmapped layers are currently not supported
	[Oct14 21:20] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c8c6956ebb3e09f587417b94e976752e674a54c123070530f95d3357c5c16a72] <==
	{"level":"warn","ts":"2025-10-14T21:19:35.300626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:35.341499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:35.394403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:35.432547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:35.476550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:35.522931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:35.574542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:35.609739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:35.653330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:35.728448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:35.764424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:35.814633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:35.838798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:35.879660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:35.937767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:35.964100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:35.997180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:36.036443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:36.051163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:36.083408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:36.102606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:36.144624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:36.181824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:36.201228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:19:36.338272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45486","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:20:17 up  4:02,  0 user,  load average: 5.27, 3.29, 2.30
	Linux no-preload-970287 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [15013baf20cf5a95034bdee86f55d5cbd3f61e45ed3265f09dab53d0df927a70] <==
	I1014 21:19:53.373817       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:19:53.374078       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1014 21:19:53.374203       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:19:53.374220       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:19:53.374233       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:19:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:19:53.575620       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:19:53.575720       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:19:53.575754       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:19:53.576268       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1014 21:19:53.775944       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1014 21:19:53.776053       1 metrics.go:72] Registering metrics
	I1014 21:19:53.776143       1 controller.go:711] "Syncing nftables rules"
	I1014 21:20:03.580439       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1014 21:20:03.580478       1 main.go:301] handling current node
	I1014 21:20:13.572999       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1014 21:20:13.573036       1 main.go:301] handling current node
	
	
	==> kube-apiserver [40fc742be94b8663094bca8a86e7415b92b1bc63eb9ca6f29cf6870f77171225] <==
	I1014 21:19:38.118521       1 controller.go:667] quota admission added evaluator for: namespaces
	E1014 21:19:38.194002       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1014 21:19:38.202722       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:19:38.203223       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1014 21:19:38.281861       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:19:38.282160       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1014 21:19:38.422210       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 21:19:38.518450       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1014 21:19:38.552686       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1014 21:19:38.561384       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 21:19:39.815274       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 21:19:39.863180       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 21:19:39.997118       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1014 21:19:40.021387       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1014 21:19:40.023164       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 21:19:40.038606       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 21:19:40.806128       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 21:19:41.526324       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1014 21:19:41.554291       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 21:19:41.577023       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1014 21:19:46.043972       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1014 21:19:46.659248       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:19:46.666118       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:19:46.963120       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1014 21:20:15.357828       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:46868: use of closed network connection
	
	
	==> kube-controller-manager [b6827cf87037af9e4d184bf4100878a2e13a94fadd577ef863c94806d069fdb1] <==
	I1014 21:19:45.914869       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1014 21:19:45.915068       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1014 21:19:45.915253       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1014 21:19:45.915379       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1014 21:19:45.916171       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1014 21:19:45.916594       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1014 21:19:45.917957       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1014 21:19:45.918414       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1014 21:19:45.915118       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1014 21:19:45.915135       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1014 21:19:45.915147       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1014 21:19:45.922083       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1014 21:19:45.925532       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-970287" podCIDRs=["10.244.0.0/24"]
	I1014 21:19:45.932159       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:19:45.934889       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:19:45.942163       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:19:45.942248       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1014 21:19:45.942277       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 21:19:45.948078       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1014 21:19:45.948252       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 21:19:45.948377       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-970287"
	I1014 21:19:45.948443       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1014 21:19:45.953633       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1014 21:19:45.980743       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:20:05.950719       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b9de27b5f727f6467ab45e6256cf147d4e251b7712de20f1fcbc247226ce5ffa] <==
	I1014 21:19:49.216442       1 server_linux.go:53] "Using iptables proxy"
	I1014 21:19:49.325422       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 21:19:49.425862       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 21:19:49.425952       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1014 21:19:49.426021       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 21:19:49.504204       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:19:49.504365       1 server_linux.go:132] "Using iptables Proxier"
	I1014 21:19:49.527352       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 21:19:49.534008       1 server.go:527] "Version info" version="v1.34.1"
	I1014 21:19:49.534042       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:19:49.536098       1 config.go:200] "Starting service config controller"
	I1014 21:19:49.536185       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 21:19:49.536232       1 config.go:106] "Starting endpoint slice config controller"
	I1014 21:19:49.536261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 21:19:49.536305       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 21:19:49.536359       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 21:19:49.541214       1 config.go:309] "Starting node config controller"
	I1014 21:19:49.541303       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 21:19:49.541336       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 21:19:49.637032       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 21:19:49.637084       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 21:19:49.637115       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5e06cf9584e2a1a55f24661c59db6655cbda2b7354b79b35ffbb540321dee304] <==
	I1014 21:19:38.950709       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:19:38.952786       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:19:38.952871       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:19:38.954657       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 21:19:38.954739       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1014 21:19:38.969728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1014 21:19:38.970071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1014 21:19:38.970262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1014 21:19:38.970398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1014 21:19:38.972518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1014 21:19:38.972756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1014 21:19:39.020861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1014 21:19:39.021099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1014 21:19:39.021273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1014 21:19:39.021321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1014 21:19:39.021359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1014 21:19:39.021502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1014 21:19:39.021540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1014 21:19:39.021579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1014 21:19:39.021617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1014 21:19:39.021679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1014 21:19:39.021958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1014 21:19:39.022048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1014 21:19:39.022073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1014 21:19:40.253868       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 14 21:19:47 no-preload-970287 kubelet[1994]: I1014 21:19:47.118016    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd169342-2896-4b4e-b062-de3894754634-xtables-lock\") pod \"kube-proxy-c4vlq\" (UID: \"bd169342-2896-4b4e-b062-de3894754634\") " pod="kube-system/kube-proxy-c4vlq"
	Oct 14 21:19:47 no-preload-970287 kubelet[1994]: I1014 21:19:47.118062    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flqlq\" (UniqueName: \"kubernetes.io/projected/bd169342-2896-4b4e-b062-de3894754634-kube-api-access-flqlq\") pod \"kube-proxy-c4vlq\" (UID: \"bd169342-2896-4b4e-b062-de3894754634\") " pod="kube-system/kube-proxy-c4vlq"
	Oct 14 21:19:47 no-preload-970287 kubelet[1994]: I1014 21:19:47.118083    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd169342-2896-4b4e-b062-de3894754634-lib-modules\") pod \"kube-proxy-c4vlq\" (UID: \"bd169342-2896-4b4e-b062-de3894754634\") " pod="kube-system/kube-proxy-c4vlq"
	Oct 14 21:19:47 no-preload-970287 kubelet[1994]: I1014 21:19:47.118103    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbcd8c52-d245-492d-b083-0b93dc854ffd-lib-modules\") pod \"kindnet-tps4w\" (UID: \"bbcd8c52-d245-492d-b083-0b93dc854ffd\") " pod="kube-system/kindnet-tps4w"
	Oct 14 21:19:47 no-preload-970287 kubelet[1994]: I1014 21:19:47.118309    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bbcd8c52-d245-492d-b083-0b93dc854ffd-cni-cfg\") pod \"kindnet-tps4w\" (UID: \"bbcd8c52-d245-492d-b083-0b93dc854ffd\") " pod="kube-system/kindnet-tps4w"
	Oct 14 21:19:47 no-preload-970287 kubelet[1994]: I1014 21:19:47.118335    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2cgb\" (UniqueName: \"kubernetes.io/projected/bbcd8c52-d245-492d-b083-0b93dc854ffd-kube-api-access-w2cgb\") pod \"kindnet-tps4w\" (UID: \"bbcd8c52-d245-492d-b083-0b93dc854ffd\") " pod="kube-system/kindnet-tps4w"
	Oct 14 21:19:47 no-preload-970287 kubelet[1994]: I1014 21:19:47.118383    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbcd8c52-d245-492d-b083-0b93dc854ffd-xtables-lock\") pod \"kindnet-tps4w\" (UID: \"bbcd8c52-d245-492d-b083-0b93dc854ffd\") " pod="kube-system/kindnet-tps4w"
	Oct 14 21:19:47 no-preload-970287 kubelet[1994]: I1014 21:19:47.118444    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bd169342-2896-4b4e-b062-de3894754634-kube-proxy\") pod \"kube-proxy-c4vlq\" (UID: \"bd169342-2896-4b4e-b062-de3894754634\") " pod="kube-system/kube-proxy-c4vlq"
	Oct 14 21:19:48 no-preload-970287 kubelet[1994]: E1014 21:19:48.220411    1994 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 14 21:19:48 no-preload-970287 kubelet[1994]: E1014 21:19:48.220552    1994 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bd169342-2896-4b4e-b062-de3894754634-kube-proxy podName:bd169342-2896-4b4e-b062-de3894754634 nodeName:}" failed. No retries permitted until 2025-10-14 21:19:48.720507505 +0000 UTC m=+7.321067864 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/bd169342-2896-4b4e-b062-de3894754634-kube-proxy") pod "kube-proxy-c4vlq" (UID: "bd169342-2896-4b4e-b062-de3894754634") : failed to sync configmap cache: timed out waiting for the condition
	Oct 14 21:19:48 no-preload-970287 kubelet[1994]: I1014 21:19:48.337946    1994 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 14 21:19:48 no-preload-970287 kubelet[1994]: W1014 21:19:48.914162    1994 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/crio-3b57741505a29cf28873c2594868dbe353558116267a23e15a86bbd97857d848 WatchSource:0}: Error finding container 3b57741505a29cf28873c2594868dbe353558116267a23e15a86bbd97857d848: Status 404 returned error can't find the container with id 3b57741505a29cf28873c2594868dbe353558116267a23e15a86bbd97857d848
	Oct 14 21:19:50 no-preload-970287 kubelet[1994]: I1014 21:19:50.148804    1994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c4vlq" podStartSLOduration=4.148782924 podStartE2EDuration="4.148782924s" podCreationTimestamp="2025-10-14 21:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:19:49.984580915 +0000 UTC m=+8.585141273" watchObservedRunningTime="2025-10-14 21:19:50.148782924 +0000 UTC m=+8.749343291"
	Oct 14 21:20:03 no-preload-970287 kubelet[1994]: I1014 21:20:03.670760    1994 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 14 21:20:03 no-preload-970287 kubelet[1994]: I1014 21:20:03.715823    1994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tps4w" podStartSLOduration=13.177596878 podStartE2EDuration="17.715799588s" podCreationTimestamp="2025-10-14 21:19:46 +0000 UTC" firstStartedPulling="2025-10-14 21:19:48.619799975 +0000 UTC m=+7.220360334" lastFinishedPulling="2025-10-14 21:19:53.158002685 +0000 UTC m=+11.758563044" observedRunningTime="2025-10-14 21:19:54.005579216 +0000 UTC m=+12.606139616" watchObservedRunningTime="2025-10-14 21:20:03.715799588 +0000 UTC m=+22.316359955"
	Oct 14 21:20:03 no-preload-970287 kubelet[1994]: I1014 21:20:03.751455    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qqqd\" (UniqueName: \"kubernetes.io/projected/70a4ed77-8432-4ad1-8ea5-28e6bb93b469-kube-api-access-8qqqd\") pod \"coredns-66bc5c9577-m627n\" (UID: \"70a4ed77-8432-4ad1-8ea5-28e6bb93b469\") " pod="kube-system/coredns-66bc5c9577-m627n"
	Oct 14 21:20:03 no-preload-970287 kubelet[1994]: I1014 21:20:03.756582    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70a4ed77-8432-4ad1-8ea5-28e6bb93b469-config-volume\") pod \"coredns-66bc5c9577-m627n\" (UID: \"70a4ed77-8432-4ad1-8ea5-28e6bb93b469\") " pod="kube-system/coredns-66bc5c9577-m627n"
	Oct 14 21:20:03 no-preload-970287 kubelet[1994]: I1014 21:20:03.857504    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnmx2\" (UniqueName: \"kubernetes.io/projected/b9d1e209-23f4-4c75-b216-869ed982bec3-kube-api-access-bnmx2\") pod \"storage-provisioner\" (UID: \"b9d1e209-23f4-4c75-b216-869ed982bec3\") " pod="kube-system/storage-provisioner"
	Oct 14 21:20:03 no-preload-970287 kubelet[1994]: I1014 21:20:03.858125    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b9d1e209-23f4-4c75-b216-869ed982bec3-tmp\") pod \"storage-provisioner\" (UID: \"b9d1e209-23f4-4c75-b216-869ed982bec3\") " pod="kube-system/storage-provisioner"
	Oct 14 21:20:04 no-preload-970287 kubelet[1994]: W1014 21:20:04.064782    1994 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/crio-28b12b5f382a8d42382ecc697e77ce7a3e9a488ba6287850c7bf71cb50270343 WatchSource:0}: Error finding container 28b12b5f382a8d42382ecc697e77ce7a3e9a488ba6287850c7bf71cb50270343: Status 404 returned error can't find the container with id 28b12b5f382a8d42382ecc697e77ce7a3e9a488ba6287850c7bf71cb50270343
	Oct 14 21:20:04 no-preload-970287 kubelet[1994]: W1014 21:20:04.070403    1994 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/crio-0e571659ad20619021527335eba4a8a2ca13b02380467a13be363a2a89f78bd7 WatchSource:0}: Error finding container 0e571659ad20619021527335eba4a8a2ca13b02380467a13be363a2a89f78bd7: Status 404 returned error can't find the container with id 0e571659ad20619021527335eba4a8a2ca13b02380467a13be363a2a89f78bd7
	Oct 14 21:20:05 no-preload-970287 kubelet[1994]: I1014 21:20:05.073939    1994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.07391848 podStartE2EDuration="16.07391848s" podCreationTimestamp="2025-10-14 21:19:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:20:05.052416997 +0000 UTC m=+23.652977372" watchObservedRunningTime="2025-10-14 21:20:05.07391848 +0000 UTC m=+23.674478839"
	Oct 14 21:20:07 no-preload-970287 kubelet[1994]: I1014 21:20:07.157887    1994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-m627n" podStartSLOduration=20.157865202 podStartE2EDuration="20.157865202s" podCreationTimestamp="2025-10-14 21:19:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 21:20:05.079748976 +0000 UTC m=+23.680309367" watchObservedRunningTime="2025-10-14 21:20:07.157865202 +0000 UTC m=+25.758425561"
	Oct 14 21:20:07 no-preload-970287 kubelet[1994]: I1014 21:20:07.287172    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw7d2\" (UniqueName: \"kubernetes.io/projected/9b46c18b-db2a-4ed6-9a6a-03b201869a09-kube-api-access-rw7d2\") pod \"busybox\" (UID: \"9b46c18b-db2a-4ed6-9a6a-03b201869a09\") " pod="default/busybox"
	Oct 14 21:20:07 no-preload-970287 kubelet[1994]: W1014 21:20:07.511189    1994 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/crio-aaf25210a48f060137a434f90a1ec49b887bf607397e11a8ddb6e81eae4b1470 WatchSource:0}: Error finding container aaf25210a48f060137a434f90a1ec49b887bf607397e11a8ddb6e81eae4b1470: Status 404 returned error can't find the container with id aaf25210a48f060137a434f90a1ec49b887bf607397e11a8ddb6e81eae4b1470
	
	
	==> storage-provisioner [c5a6bc25bc24f2b9eb307d67be6887353defd0cd02aa5eae7693a0988e038763] <==
	I1014 21:20:04.223874       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 21:20:04.239358       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 21:20:04.239485       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1014 21:20:04.242418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:20:04.249471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 21:20:04.250483       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 21:20:04.254918       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-970287_167d269d-aff9-4276-8d6b-f7bc2e38e867!
	I1014 21:20:04.261679       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4ff2602c-0892-49a1-af19-280368cb1cc0", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-970287_167d269d-aff9-4276-8d6b-f7bc2e38e867 became leader
	W1014 21:20:04.264747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:20:04.277370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 21:20:04.355611       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-970287_167d269d-aff9-4276-8d6b-f7bc2e38e867!
	W1014 21:20:06.285467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:20:06.292489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:20:08.295635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:20:08.303034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:20:10.306161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:20:10.312951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:20:12.316630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:20:12.322719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:20:14.326802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:20:14.333515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:20:16.339172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:20:16.344356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:20:18.352638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:20:18.358455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970287 -n no-preload-970287
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-970287 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.79s)
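
The storage-provisioner log above is dominated by "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings. They come from the provisioner's leader election, which still renews an Endpoints-based lock (kube-system/k8s.io-minikube-hostpath) on every cycle, and they appear unrelated to the addon-enable failure itself. For context only, a minimal, hypothetical sketch of the Lease-based lock that client-go recommends instead; the identifiers below are illustrative and this is not minikube's actual provisioner code:

	// Hypothetical sketch, not minikube code: same election, Lease lock instead of Endpoints.
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Lease object in kube-system instead of the deprecated Endpoints lock seen in the log.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("POD_NAME")},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease, starting provisioner") },
				OnStoppedLeading: func() { log.Println("lost lease, stopping") },
			},
		})
	}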

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-718270 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-718270 --alsologtostderr -v=1: exit status 80 (2.542553373s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-718270 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 21:20:25.153191  756463 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:20:25.153319  756463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:20:25.153330  756463 out.go:374] Setting ErrFile to fd 2...
	I1014 21:20:25.153337  756463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:20:25.153592  756463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:20:25.153865  756463 out.go:368] Setting JSON to false
	I1014 21:20:25.153887  756463 mustload.go:65] Loading cluster: newest-cni-718270
	I1014 21:20:25.154305  756463 config.go:182] Loaded profile config "newest-cni-718270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:25.154784  756463 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:25.177649  756463 host.go:66] Checking if "newest-cni-718270" exists ...
	I1014 21:20:25.178020  756463 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:20:25.239240  756463 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-14 21:20:25.229414822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:20:25.239968  756463 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-718270 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1014 21:20:25.243508  756463 out.go:179] * Pausing node newest-cni-718270 ... 
	I1014 21:20:25.247391  756463 host.go:66] Checking if "newest-cni-718270" exists ...
	I1014 21:20:25.247730  756463 ssh_runner.go:195] Run: systemctl --version
	I1014 21:20:25.247789  756463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:25.266082  756463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:25.371506  756463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:20:25.387408  756463 pause.go:52] kubelet running: true
	I1014 21:20:25.387474  756463 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:20:25.651476  756463 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:20:25.651569  756463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:20:25.724767  756463 cri.go:89] found id: "1237eb5a0ce86df8135e2cfe766f31419181e360858f38b8268491a866b4c28f"
	I1014 21:20:25.724791  756463 cri.go:89] found id: "dbd481283f0a185c6d4e4bca508c40b099a0b4c6da41d1ceb8c9df4e64ec7142"
	I1014 21:20:25.724797  756463 cri.go:89] found id: "246522f17441699f43d5b755d1b3968e04447f890b4feafb0eb44e40ec79cb4f"
	I1014 21:20:25.724800  756463 cri.go:89] found id: "6e970e1d72789f798808d196b5c8410d1717b9fb6b756584e8ef0b370e39b72c"
	I1014 21:20:25.724804  756463 cri.go:89] found id: "d1ce5d2ce7305a0112063ea79c98bdb64bb308e7ed50e0f0ef42ef9c62ad6b76"
	I1014 21:20:25.724810  756463 cri.go:89] found id: "f9f8c05de06667afe6beb9fb43f7e7bb59e3bb9721fbd64688bcf0e60d2e0130"
	I1014 21:20:25.724813  756463 cri.go:89] found id: ""
	I1014 21:20:25.724899  756463 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:20:25.735909  756463 retry.go:31] will retry after 180.026058ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:20:25Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:20:25.916107  756463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:20:25.930662  756463 pause.go:52] kubelet running: false
	I1014 21:20:25.930733  756463 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:20:26.103943  756463 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:20:26.104037  756463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:20:26.199676  756463 cri.go:89] found id: "1237eb5a0ce86df8135e2cfe766f31419181e360858f38b8268491a866b4c28f"
	I1014 21:20:26.199701  756463 cri.go:89] found id: "dbd481283f0a185c6d4e4bca508c40b099a0b4c6da41d1ceb8c9df4e64ec7142"
	I1014 21:20:26.199707  756463 cri.go:89] found id: "246522f17441699f43d5b755d1b3968e04447f890b4feafb0eb44e40ec79cb4f"
	I1014 21:20:26.199712  756463 cri.go:89] found id: "6e970e1d72789f798808d196b5c8410d1717b9fb6b756584e8ef0b370e39b72c"
	I1014 21:20:26.199715  756463 cri.go:89] found id: "d1ce5d2ce7305a0112063ea79c98bdb64bb308e7ed50e0f0ef42ef9c62ad6b76"
	I1014 21:20:26.199719  756463 cri.go:89] found id: "f9f8c05de06667afe6beb9fb43f7e7bb59e3bb9721fbd64688bcf0e60d2e0130"
	I1014 21:20:26.199723  756463 cri.go:89] found id: ""
	I1014 21:20:26.199772  756463 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:20:26.212255  756463 retry.go:31] will retry after 550.106291ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:20:26Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:20:26.762999  756463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:20:26.776187  756463 pause.go:52] kubelet running: false
	I1014 21:20:26.776252  756463 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:20:26.933241  756463 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:20:26.933320  756463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:20:26.999484  756463 cri.go:89] found id: "1237eb5a0ce86df8135e2cfe766f31419181e360858f38b8268491a866b4c28f"
	I1014 21:20:26.999558  756463 cri.go:89] found id: "dbd481283f0a185c6d4e4bca508c40b099a0b4c6da41d1ceb8c9df4e64ec7142"
	I1014 21:20:26.999578  756463 cri.go:89] found id: "246522f17441699f43d5b755d1b3968e04447f890b4feafb0eb44e40ec79cb4f"
	I1014 21:20:26.999598  756463 cri.go:89] found id: "6e970e1d72789f798808d196b5c8410d1717b9fb6b756584e8ef0b370e39b72c"
	I1014 21:20:26.999633  756463 cri.go:89] found id: "d1ce5d2ce7305a0112063ea79c98bdb64bb308e7ed50e0f0ef42ef9c62ad6b76"
	I1014 21:20:26.999655  756463 cri.go:89] found id: "f9f8c05de06667afe6beb9fb43f7e7bb59e3bb9721fbd64688bcf0e60d2e0130"
	I1014 21:20:26.999673  756463 cri.go:89] found id: ""
	I1014 21:20:26.999775  756463 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:20:27.015676  756463 retry.go:31] will retry after 346.350498ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:20:27Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:20:27.362306  756463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:20:27.376017  756463 pause.go:52] kubelet running: false
	I1014 21:20:27.376126  756463 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:20:27.523930  756463 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:20:27.524037  756463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:20:27.595267  756463 cri.go:89] found id: "1237eb5a0ce86df8135e2cfe766f31419181e360858f38b8268491a866b4c28f"
	I1014 21:20:27.595289  756463 cri.go:89] found id: "dbd481283f0a185c6d4e4bca508c40b099a0b4c6da41d1ceb8c9df4e64ec7142"
	I1014 21:20:27.595295  756463 cri.go:89] found id: "246522f17441699f43d5b755d1b3968e04447f890b4feafb0eb44e40ec79cb4f"
	I1014 21:20:27.595299  756463 cri.go:89] found id: "6e970e1d72789f798808d196b5c8410d1717b9fb6b756584e8ef0b370e39b72c"
	I1014 21:20:27.595312  756463 cri.go:89] found id: "d1ce5d2ce7305a0112063ea79c98bdb64bb308e7ed50e0f0ef42ef9c62ad6b76"
	I1014 21:20:27.595317  756463 cri.go:89] found id: "f9f8c05de06667afe6beb9fb43f7e7bb59e3bb9721fbd64688bcf0e60d2e0130"
	I1014 21:20:27.595320  756463 cri.go:89] found id: ""
	I1014 21:20:27.595369  756463 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:20:27.610157  756463 out.go:203] 
	W1014 21:20:27.613127  756463 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:20:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:20:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 21:20:27.613192  756463 out.go:285] * 
	* 
	W1014 21:20:27.621217  756463 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 21:20:27.624048  756463 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-718270 --alsologtostderr -v=1 failed: exit status 80
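
The exit status 80 (GUEST_PAUSE) above is not a kubelet problem: the stderr shows the kubelet is disabled successfully ("kubelet running: true", then "false" on the next check), after which pause repeatedly fails to enumerate running containers with `sudo runc list -f json` because /run/runc does not exist on the node ("open /run/runc: no such file or directory"). After three retries the error is surfaced as GUEST_PAUSE. A rough, hypothetical reproduction of that single failing step, with delays copied from the retry lines above; this is not minikube's real pause implementation:

	// Hypothetical sketch of the step that keeps failing above:
	// run `sudo runc list -f json` on the node and retry a few times before giving up.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func listRunc() ([]byte, error) {
		return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	}

	func main() {
		// Delays taken from the log; minikube computes its own backoff per attempt.
		for _, d := range []time.Duration{180 * time.Millisecond, 550 * time.Millisecond, 346 * time.Millisecond} {
			out, err := listRunc()
			if err == nil {
				fmt.Printf("running containers: %s\n", out)
				return
			}
			fmt.Printf("runc list failed: %v (%s), retrying in %v\n", err, out, d)
			time.Sleep(d)
		}
		// With /run/runc missing, every attempt fails and pause exits with GUEST_PAUSE (status 80).
		fmt.Println("giving up")
	}
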
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-718270
helpers_test.go:243: (dbg) docker inspect newest-cni-718270:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370",
	        "Created": "2025-10-14T21:19:22.499110218Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 754156,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:20:07.439090943Z",
	            "FinishedAt": "2025-10-14T21:20:06.259914795Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370/hostname",
	        "HostsPath": "/var/lib/docker/containers/9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370/hosts",
	        "LogPath": "/var/lib/docker/containers/9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370/9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370-json.log",
	        "Name": "/newest-cni-718270",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-718270:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-718270",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370",
	                "LowerDir": "/var/lib/docker/overlay2/8c30d9867a2116971c960b229284f00bed5d0fdc18c4af19422b4ddc6381f327-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8c30d9867a2116971c960b229284f00bed5d0fdc18c4af19422b4ddc6381f327/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8c30d9867a2116971c960b229284f00bed5d0fdc18c4af19422b4ddc6381f327/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8c30d9867a2116971c960b229284f00bed5d0fdc18c4af19422b4ddc6381f327/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-718270",
	                "Source": "/var/lib/docker/volumes/newest-cni-718270/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-718270",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-718270",
	                "name.minikube.sigs.k8s.io": "newest-cni-718270",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a3e6bb2de3cc2607658ae1ebe93c1069f41dcc35520abcc5edf7424198344a3",
	            "SandboxKey": "/var/run/docker/netns/2a3e6bb2de3c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33822"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33823"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33826"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33824"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33825"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-718270": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:de:ee:08:1d:f7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "18b1986d1416397d99b38ffdc32e9f008860c1b13740e1977cc68be30c32f62d",
	                    "EndpointID": "e78a049a8b6a41d4c54ee787bd83748611816e6b95d5029e065f12195bf3523e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-718270",
	                        "9615898dd977"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
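
The SSH endpoint the pause command used earlier (127.0.0.1:33822 in the stderr above) is resolved straight from this inspect output via the NetworkSettings.Ports map. A small sketch of the same lookup, using the Go template already visible in the cli_runner line; the profile name is the one from this report:

	// Hypothetical helper: read the host port mapped to the node's SSH port (22/tcp)
	// from `docker container inspect`, as the cli_runner call in the stderr does.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"newest-cni-718270").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // prints 33822 for the container above
	}
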
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718270 -n newest-cni-718270
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718270 -n newest-cni-718270: exit status 2 (344.317627ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-718270 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-718270 logs -n 25: (1.075705107s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-319221 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ start   │ -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-650951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	│ stop    │ -p embed-certs-650951 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-650951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ start   │ -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:18 UTC │
	│ image   │ default-k8s-diff-port-319221 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ pause   │ -p default-k8s-diff-port-319221 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-319221                                                                                                                                                                                                               │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ delete  │ -p default-k8s-diff-port-319221                                                                                                                                                                                                               │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ delete  │ -p disable-driver-mounts-457849                                                                                                                                                                                                               │ disable-driver-mounts-457849 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ start   │ -p no-preload-970287 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:20 UTC │
	│ image   │ embed-certs-650951 image list --format=json                                                                                                                                                                                                   │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ pause   │ -p embed-certs-650951 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │                     │
	│ delete  │ -p embed-certs-650951                                                                                                                                                                                                                         │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ delete  │ -p embed-certs-650951                                                                                                                                                                                                                         │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ start   │ -p newest-cni-718270 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-718270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	│ stop    │ -p newest-cni-718270 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ addons  │ enable dashboard -p newest-cni-718270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ start   │ -p newest-cni-718270 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ addons  │ enable metrics-server -p no-preload-970287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	│ stop    │ -p no-preload-970287 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	│ image   │ newest-cni-718270 image list --format=json                                                                                                                                                                                                    │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ pause   │ -p newest-cni-718270 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:20:07
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:20:07.134354  754019 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:20:07.134685  754019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:20:07.134703  754019 out.go:374] Setting ErrFile to fd 2...
	I1014 21:20:07.134713  754019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:20:07.135093  754019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:20:07.135610  754019 out.go:368] Setting JSON to false
	I1014 21:20:07.136773  754019 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14558,"bootTime":1760462250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:20:07.136855  754019 start.go:141] virtualization:  
	I1014 21:20:07.140373  754019 out.go:179] * [newest-cni-718270] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:20:07.144963  754019 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:20:07.145035  754019 notify.go:220] Checking for updates...
	I1014 21:20:07.156463  754019 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:20:07.162126  754019 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:20:07.166245  754019 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:20:07.169573  754019 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:20:07.172540  754019 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:20:07.176080  754019 config.go:182] Loaded profile config "newest-cni-718270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:07.176799  754019 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:20:07.207727  754019 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:20:07.207914  754019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:20:07.272227  754019 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:20:07.262374652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:20:07.272432  754019 docker.go:318] overlay module found
	I1014 21:20:07.275576  754019 out.go:179] * Using the docker driver based on existing profile
	I1014 21:20:07.278402  754019 start.go:305] selected driver: docker
	I1014 21:20:07.278421  754019 start.go:925] validating driver "docker" against &{Name:newest-cni-718270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:20:07.278544  754019 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:20:07.279261  754019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:20:07.335404  754019 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:20:07.326146898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:20:07.335753  754019 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1014 21:20:07.335797  754019 cni.go:84] Creating CNI manager for ""
	I1014 21:20:07.335873  754019 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:20:07.335916  754019 start.go:349] cluster config:
	{Name:newest-cni-718270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:20:07.339032  754019 out.go:179] * Starting "newest-cni-718270" primary control-plane node in "newest-cni-718270" cluster
	I1014 21:20:07.341800  754019 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:20:07.344678  754019 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:20:07.347442  754019 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:20:07.347502  754019 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 21:20:07.347516  754019 cache.go:58] Caching tarball of preloaded images
	I1014 21:20:07.347514  754019 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:20:07.347606  754019 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:20:07.347617  754019 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 21:20:07.347741  754019 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/config.json ...
	I1014 21:20:07.366876  754019 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:20:07.366900  754019 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:20:07.366922  754019 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:20:07.366947  754019 start.go:360] acquireMachinesLock for newest-cni-718270: {Name:mk4b94f993721098e6e50976108704497d217d5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:20:07.367004  754019 start.go:364] duration metric: took 34.446µs to acquireMachinesLock for "newest-cni-718270"
	I1014 21:20:07.367027  754019 start.go:96] Skipping create...Using existing machine configuration
	I1014 21:20:07.367044  754019 fix.go:54] fixHost starting: 
	I1014 21:20:07.367295  754019 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:07.403346  754019 fix.go:112] recreateIfNeeded on newest-cni-718270: state=Stopped err=<nil>
	W1014 21:20:07.403386  754019 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 21:20:07.406806  754019 out.go:252] * Restarting existing docker container for "newest-cni-718270" ...
	I1014 21:20:07.406910  754019 cli_runner.go:164] Run: docker start newest-cni-718270
	I1014 21:20:07.710141  754019 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:07.732519  754019 kic.go:430] container "newest-cni-718270" state is running.
	I1014 21:20:07.732946  754019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718270
	I1014 21:20:07.757790  754019 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/config.json ...
	I1014 21:20:07.758026  754019 machine.go:93] provisionDockerMachine start ...
	I1014 21:20:07.758090  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:07.780260  754019 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:07.780635  754019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1014 21:20:07.780652  754019 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:20:07.782615  754019 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1014 21:20:10.928440  754019 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-718270
	
	I1014 21:20:10.928514  754019 ubuntu.go:182] provisioning hostname "newest-cni-718270"
	I1014 21:20:10.928598  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:10.946074  754019 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:10.946408  754019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1014 21:20:10.946425  754019 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-718270 && echo "newest-cni-718270" | sudo tee /etc/hostname
	I1014 21:20:11.111671  754019 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-718270
	
	I1014 21:20:11.111755  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:11.132960  754019 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:11.133268  754019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1014 21:20:11.133292  754019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-718270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-718270/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-718270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:20:11.276674  754019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
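Illustrative note: the three commands above (hostname, sudo hostname ... | sudo tee /etc/hostname, and the /etc/hosts edit) are run as plain SSH execs against the container's forwarded port 33822. The following minimal Go sketch reproduces that round trip; it is not libmachine's implementation, and the key path and port are simply the ones reported in this log. Error handling is kept deliberately terse.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port taken from the log above; adjust for other machines.
	keyPath := "/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // host key is not pinned in this sketch
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33822", cfg)
	if err != nil {
		// Early dials can fail with "handshake failed: EOF" while the container is still booting,
		// as seen a few lines above.
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.Output("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}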
	I1014 21:20:11.276703  754019 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:20:11.276733  754019 ubuntu.go:190] setting up certificates
	I1014 21:20:11.276743  754019 provision.go:84] configureAuth start
	I1014 21:20:11.276806  754019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718270
	I1014 21:20:11.294451  754019 provision.go:143] copyHostCerts
	I1014 21:20:11.294580  754019 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:20:11.294605  754019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:20:11.294689  754019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:20:11.294795  754019 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:20:11.294807  754019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:20:11.294834  754019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:20:11.294892  754019 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:20:11.294901  754019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:20:11.294924  754019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:20:11.294973  754019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.newest-cni-718270 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-718270]
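Illustrative note: the server certificate generated above carries the SANs listed in the log (127.0.0.1, 192.168.85.2, localhost, minikube, newest-cni-718270) and is signed by minikubeCA. The sketch below shows roughly how such a SAN-bearing certificate is produced with crypto/x509; it is not minikube's provisioning code, and it creates a throwaway CA instead of loading .minikube/certs/ca.pem. Errors are ignored for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real run reuses the existing minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the org and SANs reported in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-718270"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		DNSNames:     []string{"localhost", "minikube", "newest-cni-718270"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}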
	I1014 21:20:12.196847  754019 provision.go:177] copyRemoteCerts
	I1014 21:20:12.196918  754019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:20:12.196958  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:12.216799  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:12.322441  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 21:20:12.340555  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:20:12.358874  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 21:20:12.376881  754019 provision.go:87] duration metric: took 1.100112396s to configureAuth
	I1014 21:20:12.376904  754019 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:20:12.377093  754019 config.go:182] Loaded profile config "newest-cni-718270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:12.377212  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:12.394315  754019 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:12.394623  754019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1014 21:20:12.394643  754019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:20:12.701850  754019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:20:12.701872  754019 machine.go:96] duration metric: took 4.943837245s to provisionDockerMachine
	I1014 21:20:12.701883  754019 start.go:293] postStartSetup for "newest-cni-718270" (driver="docker")
	I1014 21:20:12.701893  754019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:20:12.701954  754019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:20:12.702008  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:12.720190  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:12.824501  754019 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:20:12.828073  754019 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:20:12.828116  754019 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:20:12.828128  754019 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:20:12.828183  754019 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:20:12.828269  754019 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:20:12.828398  754019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:20:12.836051  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:20:12.855105  754019 start.go:296] duration metric: took 153.205972ms for postStartSetup
	I1014 21:20:12.855219  754019 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:20:12.855273  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:12.874608  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:12.979148  754019 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:20:12.983745  754019 fix.go:56] duration metric: took 5.616700881s for fixHost
	I1014 21:20:12.983772  754019 start.go:83] releasing machines lock for "newest-cni-718270", held for 5.616755946s
	I1014 21:20:12.983843  754019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718270
	I1014 21:20:13.000549  754019 ssh_runner.go:195] Run: cat /version.json
	I1014 21:20:13.000620  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:13.000872  754019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:20:13.000930  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:13.020193  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:13.028428  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:13.120200  754019 ssh_runner.go:195] Run: systemctl --version
	I1014 21:20:13.210756  754019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:20:13.247270  754019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:20:13.251931  754019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:20:13.252004  754019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:20:13.260216  754019 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 21:20:13.260291  754019 start.go:495] detecting cgroup driver to use...
	I1014 21:20:13.260416  754019 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:20:13.260469  754019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:20:13.275982  754019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:20:13.288957  754019 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:20:13.289022  754019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:20:13.305079  754019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:20:13.319186  754019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:20:13.441089  754019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:20:13.579697  754019 docker.go:234] disabling docker service ...
	I1014 21:20:13.579808  754019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:20:13.596746  754019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:20:13.611616  754019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:20:13.737848  754019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:20:13.856259  754019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:20:13.871464  754019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:20:13.887612  754019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:20:13.887681  754019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.898094  754019 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:20:13.898174  754019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.908629  754019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.918430  754019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.928660  754019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:20:13.937754  754019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.947219  754019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.956774  754019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.967169  754019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:20:13.975143  754019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:20:13.982785  754019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:14.116745  754019 ssh_runner.go:195] Run: sudo systemctl restart crio
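For readability, the sed edits above amount to the following keys in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted (reconstructed from the commands, not captured from the node; surrounding TOML sections and untouched keys are omitted):

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]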
	I1014 21:20:14.263711  754019 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:20:14.263858  754019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:20:14.268618  754019 start.go:563] Will wait 60s for crictl version
	I1014 21:20:14.268687  754019 ssh_runner.go:195] Run: which crictl
	I1014 21:20:14.272565  754019 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:20:14.298294  754019 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:20:14.298466  754019 ssh_runner.go:195] Run: crio --version
	I1014 21:20:14.330916  754019 ssh_runner.go:195] Run: crio --version
	I1014 21:20:14.366083  754019 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 21:20:14.368727  754019 cli_runner.go:164] Run: docker network inspect newest-cni-718270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:20:14.390570  754019 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1014 21:20:14.394557  754019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:20:14.406994  754019 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1014 21:20:14.409789  754019 kubeadm.go:883] updating cluster {Name:newest-cni-718270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:20:14.409934  754019 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:20:14.410012  754019 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:20:14.445834  754019 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:20:14.445860  754019 crio.go:433] Images already preloaded, skipping extraction
	I1014 21:20:14.445916  754019 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:20:14.471310  754019 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:20:14.471336  754019 cache_images.go:85] Images are preloaded, skipping loading
	I1014 21:20:14.471344  754019 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1014 21:20:14.471438  754019 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-718270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 21:20:14.471523  754019 ssh_runner.go:195] Run: crio config
	I1014 21:20:14.526158  754019 cni.go:84] Creating CNI manager for ""
	I1014 21:20:14.526190  754019 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:20:14.526212  754019 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1014 21:20:14.526235  754019 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-718270 NodeName:newest-cni-718270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:20:14.526466  754019 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-718270"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
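Illustrative note: the config printed above is a single four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a hedged sketch only, not minikube code, the documents can be enumerated with gopkg.in/yaml.v3; the local file name "kubeadm.yaml" is an assumption.

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // assumed local copy of the generated config
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Each document in the stream is a separate kubeadm/kubelet/kube-proxy API object.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		err := dec.Decode(&doc)
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}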
	
	I1014 21:20:14.526556  754019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:20:14.533974  754019 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:20:14.534061  754019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:20:14.541524  754019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1014 21:20:14.554814  754019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:20:14.567474  754019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1014 21:20:14.581058  754019 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:20:14.584758  754019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:20:14.594932  754019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:14.716570  754019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:20:14.734870  754019 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270 for IP: 192.168.85.2
	I1014 21:20:14.734892  754019 certs.go:195] generating shared ca certs ...
	I1014 21:20:14.734909  754019 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:14.735067  754019 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:20:14.735116  754019 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:20:14.735129  754019 certs.go:257] generating profile certs ...
	I1014 21:20:14.735217  754019 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/client.key
	I1014 21:20:14.735293  754019 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.key.df573141
	I1014 21:20:14.735335  754019 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/proxy-client.key
	I1014 21:20:14.735462  754019 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:20:14.735498  754019 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:20:14.735509  754019 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:20:14.735533  754019 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:20:14.735558  754019 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:20:14.735586  754019 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:20:14.735637  754019 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:20:14.736438  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:20:14.758409  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:20:14.776281  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:20:14.793679  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:20:14.811490  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 21:20:14.837714  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 21:20:14.864140  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:20:14.889890  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 21:20:14.920159  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:20:14.947308  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:20:14.970838  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:20:14.989166  754019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:20:15.004057  754019 ssh_runner.go:195] Run: openssl version
	I1014 21:20:15.015587  754019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:20:15.027423  754019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:15.034441  754019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:15.034578  754019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:15.078626  754019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 21:20:15.088718  754019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:20:15.098290  754019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:20:15.102477  754019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:20:15.102592  754019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:20:15.144904  754019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:20:15.153585  754019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:20:15.162311  754019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:20:15.166491  754019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:20:15.166581  754019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:20:15.216090  754019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 21:20:15.224652  754019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:20:15.230727  754019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 21:20:15.280805  754019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 21:20:15.338649  754019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 21:20:15.409894  754019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 21:20:15.484821  754019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 21:20:15.556404  754019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 21:20:15.677557  754019 kubeadm.go:400] StartCluster: {Name:newest-cni-718270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:20:15.677653  754019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:20:15.677725  754019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:20:15.774849  754019 cri.go:89] found id: "246522f17441699f43d5b755d1b3968e04447f890b4feafb0eb44e40ec79cb4f"
	I1014 21:20:15.774872  754019 cri.go:89] found id: "6e970e1d72789f798808d196b5c8410d1717b9fb6b756584e8ef0b370e39b72c"
	I1014 21:20:15.774877  754019 cri.go:89] found id: "d1ce5d2ce7305a0112063ea79c98bdb64bb308e7ed50e0f0ef42ef9c62ad6b76"
	I1014 21:20:15.774881  754019 cri.go:89] found id: "f9f8c05de06667afe6beb9fb43f7e7bb59e3bb9721fbd64688bcf0e60d2e0130"
	I1014 21:20:15.774884  754019 cri.go:89] found id: ""
	I1014 21:20:15.774940  754019 ssh_runner.go:195] Run: sudo runc list -f json
	W1014 21:20:15.797126  754019 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:20:15Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:20:15.797207  754019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:20:15.827155  754019 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 21:20:15.827177  754019 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 21:20:15.827227  754019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 21:20:15.843285  754019 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 21:20:15.843857  754019 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-718270" does not appear in /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:20:15.844090  754019 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-545047/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-718270" cluster setting kubeconfig missing "newest-cni-718270" context setting]
	I1014 21:20:15.844829  754019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:15.848965  754019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 21:20:15.885698  754019 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1014 21:20:15.885729  754019 kubeadm.go:601] duration metric: took 58.544655ms to restartPrimaryControlPlane
	I1014 21:20:15.885737  754019 kubeadm.go:402] duration metric: took 208.198673ms to StartCluster
	I1014 21:20:15.885751  754019 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:15.885816  754019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:20:15.886774  754019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:15.886985  754019 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:20:15.887395  754019 config.go:182] Loaded profile config "newest-cni-718270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:15.887445  754019 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 21:20:15.887511  754019 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-718270"
	I1014 21:20:15.887525  754019 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-718270"
	W1014 21:20:15.887531  754019 addons.go:247] addon storage-provisioner should already be in state true
	I1014 21:20:15.887551  754019 host.go:66] Checking if "newest-cni-718270" exists ...
	I1014 21:20:15.888069  754019 addons.go:69] Setting dashboard=true in profile "newest-cni-718270"
	I1014 21:20:15.888087  754019 addons.go:238] Setting addon dashboard=true in "newest-cni-718270"
	W1014 21:20:15.888093  754019 addons.go:247] addon dashboard should already be in state true
	I1014 21:20:15.888122  754019 host.go:66] Checking if "newest-cni-718270" exists ...
	I1014 21:20:15.891768  754019 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:15.892157  754019 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:15.899140  754019 addons.go:69] Setting default-storageclass=true in profile "newest-cni-718270"
	I1014 21:20:15.899179  754019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-718270"
	I1014 21:20:15.899515  754019 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:15.900224  754019 out.go:179] * Verifying Kubernetes components...
	I1014 21:20:15.905445  754019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:16.011967  754019 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:20:16.012056  754019 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1014 21:20:16.015069  754019 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:20:16.015094  754019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 21:20:16.015157  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:16.019125  754019 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1014 21:20:16.024059  754019 addons.go:238] Setting addon default-storageclass=true in "newest-cni-718270"
	W1014 21:20:16.024080  754019 addons.go:247] addon default-storageclass should already be in state true
	I1014 21:20:16.024131  754019 host.go:66] Checking if "newest-cni-718270" exists ...
	I1014 21:20:16.024818  754019 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:16.024968  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1014 21:20:16.024979  754019 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1014 21:20:16.025027  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:16.080715  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:16.136708  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:16.144655  754019 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 21:20:16.144676  754019 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 21:20:16.144737  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:16.197439  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:16.481649  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1014 21:20:16.481670  754019 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1014 21:20:16.502619  754019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:20:16.514590  754019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:20:16.549423  754019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 21:20:16.583623  754019 api_server.go:52] waiting for apiserver process to appear ...
	I1014 21:20:16.583698  754019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 21:20:16.590156  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1014 21:20:16.590180  754019 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1014 21:20:16.682575  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1014 21:20:16.682616  754019 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1014 21:20:16.819942  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1014 21:20:16.819963  754019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1014 21:20:16.902917  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1014 21:20:16.902939  754019 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1014 21:20:16.940597  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1014 21:20:16.940620  754019 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1014 21:20:16.973687  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1014 21:20:16.973706  754019 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1014 21:20:16.993947  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1014 21:20:16.993967  754019 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1014 21:20:17.013338  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 21:20:17.013359  754019 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1014 21:20:17.029955  754019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 21:20:23.774881  754019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.260206047s)
	I1014 21:20:23.774951  754019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.225507569s)
	I1014 21:20:23.775284  754019 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.191572422s)
	I1014 21:20:23.775309  754019 api_server.go:72] duration metric: took 7.888282904s to wait for apiserver process to appear ...
	I1014 21:20:23.775316  754019 api_server.go:88] waiting for apiserver healthz status ...
	I1014 21:20:23.775333  754019 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:20:23.775631  754019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.745646637s)
	I1014 21:20:23.779064  754019 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-718270 addons enable metrics-server
	
	I1014 21:20:23.797987  754019 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 21:20:23.798020  754019 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 21:20:23.816514  754019 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1014 21:20:23.819414  754019 addons.go:514] duration metric: took 7.931959968s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1014 21:20:24.276076  754019 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:20:24.307874  754019 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1014 21:20:24.309041  754019 api_server.go:141] control plane version: v1.34.1
	I1014 21:20:24.309072  754019 api_server.go:131] duration metric: took 533.748771ms to wait for apiserver health ...
	I1014 21:20:24.309082  754019 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 21:20:24.312890  754019 system_pods.go:59] 8 kube-system pods found
	I1014 21:20:24.312933  754019 system_pods.go:61] "coredns-66bc5c9577-bdhlq" [b96a6307-6a92-458c-8cbf-c5c102940d5f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1014 21:20:24.312942  754019 system_pods.go:61] "etcd-newest-cni-718270" [f355740c-adf7-429e-a1da-7ce4543c4d04] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 21:20:24.312952  754019 system_pods.go:61] "kindnet-vdqbd" [5e5738b0-840c-4264-b49e-f9e41a70647e] Running
	I1014 21:20:24.312960  754019 system_pods.go:61] "kube-apiserver-newest-cni-718270" [d2e80a6a-1be9-4d68-8c1f-ca465eeb82ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 21:20:24.312973  754019 system_pods.go:61] "kube-controller-manager-newest-cni-718270" [66df72c2-6625-4150-bedb-4fbf0594354f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 21:20:24.312978  754019 system_pods.go:61] "kube-proxy-97kps" [c9808191-fd7e-4064-a7a4-f8d026004ef8] Running
	I1014 21:20:24.312991  754019 system_pods.go:61] "kube-scheduler-newest-cni-718270" [815541f5-8b42-4b45-b84d-3c1d72b0060f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 21:20:24.312996  754019 system_pods.go:61] "storage-provisioner" [f93166f5-1e47-44fd-b4b9-95957da83455] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1014 21:20:24.313005  754019 system_pods.go:74] duration metric: took 3.915452ms to wait for pod list to return data ...
	I1014 21:20:24.313019  754019 default_sa.go:34] waiting for default service account to be created ...
	I1014 21:20:24.316689  754019 default_sa.go:45] found service account: "default"
	I1014 21:20:24.316715  754019 default_sa.go:55] duration metric: took 3.689216ms for default service account to be created ...
	I1014 21:20:24.316728  754019 kubeadm.go:586] duration metric: took 8.429700271s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1014 21:20:24.316747  754019 node_conditions.go:102] verifying NodePressure condition ...
	I1014 21:20:24.321611  754019 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 21:20:24.321644  754019 node_conditions.go:123] node cpu capacity is 2
	I1014 21:20:24.321658  754019 node_conditions.go:105] duration metric: took 4.905061ms to run NodePressure ...
	I1014 21:20:24.321670  754019 start.go:241] waiting for startup goroutines ...
	I1014 21:20:24.321677  754019 start.go:246] waiting for cluster config update ...
	I1014 21:20:24.321706  754019 start.go:255] writing updated cluster config ...
	I1014 21:20:24.322007  754019 ssh_runner.go:195] Run: rm -f paused
	I1014 21:20:24.404075  754019 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1014 21:20:24.407291  754019 out.go:179] * Done! kubectl is now configured to use "newest-cni-718270" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.200415808Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.20795709Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=52716703-74e8-468c-8e43-5b589ceadc2a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.228561648Z" level=info msg="Ran pod sandbox 3e9d16463311a2b2b031da250b23b12dcfca55d76b664780a2089e97e39b150a with infra container: kube-system/kindnet-vdqbd/POD" id=52716703-74e8-468c-8e43-5b589ceadc2a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.229754805Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-97kps/POD" id=e6bc5ec6-2cee-417b-b8ec-96534ebe2b3c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.23182912Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.233213567Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ce764dc4-b55c-4fac-8b05-0587785860d4 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.237224439Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e6bc5ec6-2cee-417b-b8ec-96534ebe2b3c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.240988299Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d118b9e8-83d6-4cde-a7a4-81860ab60a61 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.244788475Z" level=info msg="Creating container: kube-system/kindnet-vdqbd/kindnet-cni" id=7f812a76-399f-4a84-b5a0-6e28bb21d365 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.245177103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.249653748Z" level=info msg="Ran pod sandbox 10a21d5b021e740ac2ad1e6a25d5b8e96172cf83752c737c0f74f00a2961f62c with infra container: kube-system/kube-proxy-97kps/POD" id=e6bc5ec6-2cee-417b-b8ec-96534ebe2b3c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.261403577Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=ad58d34d-600d-42be-b24c-0fef412732b9 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.262540208Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.264205784Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=dfaf7952-f5d2-4a27-85ac-09f17cc72f98 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.265362592Z" level=info msg="Creating container: kube-system/kube-proxy-97kps/kube-proxy" id=44496353-daaa-402f-9622-ab27b7c002e1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.265682311Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.267164237Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.281954095Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.282817514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.324256246Z" level=info msg="Created container dbd481283f0a185c6d4e4bca508c40b099a0b4c6da41d1ceb8c9df4e64ec7142: kube-system/kindnet-vdqbd/kindnet-cni" id=7f812a76-399f-4a84-b5a0-6e28bb21d365 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.325171071Z" level=info msg="Starting container: dbd481283f0a185c6d4e4bca508c40b099a0b4c6da41d1ceb8c9df4e64ec7142" id=cbffa4b2-95c9-4e93-8010-f03a8acf3d4c name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.328663582Z" level=info msg="Started container" PID=1061 containerID=dbd481283f0a185c6d4e4bca508c40b099a0b4c6da41d1ceb8c9df4e64ec7142 description=kube-system/kindnet-vdqbd/kindnet-cni id=cbffa4b2-95c9-4e93-8010-f03a8acf3d4c name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e9d16463311a2b2b031da250b23b12dcfca55d76b664780a2089e97e39b150a
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.367223495Z" level=info msg="Created container 1237eb5a0ce86df8135e2cfe766f31419181e360858f38b8268491a866b4c28f: kube-system/kube-proxy-97kps/kube-proxy" id=44496353-daaa-402f-9622-ab27b7c002e1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.368227872Z" level=info msg="Starting container: 1237eb5a0ce86df8135e2cfe766f31419181e360858f38b8268491a866b4c28f" id=6658575a-829f-41de-8bee-76a129671bf9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.37744155Z" level=info msg="Started container" PID=1062 containerID=1237eb5a0ce86df8135e2cfe766f31419181e360858f38b8268491a866b4c28f description=kube-system/kube-proxy-97kps/kube-proxy id=6658575a-829f-41de-8bee-76a129671bf9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=10a21d5b021e740ac2ad1e6a25d5b8e96172cf83752c737c0f74f00a2961f62c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1237eb5a0ce86       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   10a21d5b021e7       kube-proxy-97kps                            kube-system
	dbd481283f0a1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   3e9d16463311a       kindnet-vdqbd                               kube-system
	246522f174416       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   13 seconds ago      Running             etcd                      1                   316b34ab42554       etcd-newest-cni-718270                      kube-system
	6e970e1d72789       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   13 seconds ago      Running             kube-apiserver            1                   d04ba1eef6bc7       kube-apiserver-newest-cni-718270            kube-system
	d1ce5d2ce7305       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago      Running             kube-controller-manager   1                   dbba3576276f3       kube-controller-manager-newest-cni-718270   kube-system
	f9f8c05de0666       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   13 seconds ago      Running             kube-scheduler            1                   a7b6f5c6ea3d3       kube-scheduler-newest-cni-718270            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-718270
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-718270
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=newest-cni-718270
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T21_19_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 21:19:53 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-718270
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 21:20:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 21:20:22 +0000   Tue, 14 Oct 2025 21:19:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 21:20:22 +0000   Tue, 14 Oct 2025 21:19:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 21:20:22 +0000   Tue, 14 Oct 2025 21:19:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 14 Oct 2025 21:20:22 +0000   Tue, 14 Oct 2025 21:19:46 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-718270
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 15dd3ba01c1c42e29ade3f0e46280220
	  System UUID:                a5834ddd-d2a7-4491-bd6d-465132e0fdf2
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-718270                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-vdqbd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-718270             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-718270    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-97kps                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-718270             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node newest-cni-718270 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node newest-cni-718270 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     43s (x8 over 43s)  kubelet          Node newest-cni-718270 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-718270 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-718270 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-718270 status is now: NodeHasSufficientPID
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           28s                node-controller  Node newest-cni-718270 event: Registered Node newest-cni-718270 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14s (x2 over 14s)  kubelet          Node newest-cni-718270 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x2 over 14s)  kubelet          Node newest-cni-718270 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x2 over 14s)  kubelet          Node newest-cni-718270 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-718270 event: Registered Node newest-cni-718270 in Controller
	
	
	==> dmesg <==
	[Oct14 20:49] overlayfs: idmapped layers are currently not supported
	[ +15.861847] overlayfs: idmapped layers are currently not supported
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	[Oct14 21:03] overlayfs: idmapped layers are currently not supported
	[Oct14 21:04] overlayfs: idmapped layers are currently not supported
	[Oct14 21:12] overlayfs: idmapped layers are currently not supported
	[Oct14 21:13] overlayfs: idmapped layers are currently not supported
	[ +32.288963] overlayfs: idmapped layers are currently not supported
	[Oct14 21:15] overlayfs: idmapped layers are currently not supported
	[Oct14 21:16] overlayfs: idmapped layers are currently not supported
	[ +21.775876] overlayfs: idmapped layers are currently not supported
	[Oct14 21:17] overlayfs: idmapped layers are currently not supported
	[Oct14 21:18] overlayfs: idmapped layers are currently not supported
	[Oct14 21:19] overlayfs: idmapped layers are currently not supported
	[ +43.590859] overlayfs: idmapped layers are currently not supported
	[Oct14 21:20] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [246522f17441699f43d5b755d1b3968e04447f890b4feafb0eb44e40ec79cb4f] <==
	{"level":"warn","ts":"2025-10-14T21:20:19.981969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.010184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.075322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.091740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.108562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.134562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.173784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.198549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.222537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.250999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.312629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.337155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.373099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.409059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.434280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.477047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.521888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.585392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.631694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.701671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.725079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.766227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.794163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.959182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33724","server-name":"","error":"EOF"}
	2025/10/14 21:20:25 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> kernel <==
	 21:20:28 up  4:02,  0 user,  load average: 5.05, 3.32, 2.32
	Linux newest-cni-718270 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dbd481283f0a185c6d4e4bca508c40b099a0b4c6da41d1ceb8c9df4e64ec7142] <==
	I1014 21:20:23.488935       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:20:23.492033       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1014 21:20:23.495216       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:20:23.495305       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:20:23.495344       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:20:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:20:23.708076       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:20:23.708282       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:20:23.708393       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:20:23.710342       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [6e970e1d72789f798808d196b5c8410d1717b9fb6b756584e8ef0b370e39b72c] <==
	I1014 21:20:22.529900       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 21:20:22.529996       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1014 21:20:22.530121       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 21:20:22.530170       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 21:20:22.542739       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1014 21:20:22.556075       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1014 21:20:22.608136       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1014 21:20:22.975218       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 21:20:22.993501       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 21:20:23.129369       1 controller.go:667] quota admission added evaluator for: namespaces
	I1014 21:20:23.318885       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1014 21:20:23.426240       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 21:20:23.458252       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 21:20:23.649256       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.172.115"}
	I1014 21:20:23.723891       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.219.188"}
	{"level":"warn","ts":"2025-10-14T21:20:25.601626Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013bb4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1014 21:20:25.601993       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1014 21:20:25.602163       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1014 21:20:25.602073       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 8.928µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1014 21:20:25.603343       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1014 21:20:25.603510       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.735974ms" method="PATCH" path="/api/v1/namespaces/default/events/newest-cni-718270.186e7855f5e2fd55" result=null
	I1014 21:20:25.824349       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1014 21:20:25.873669       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1014 21:20:25.927108       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 21:20:25.976256       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d1ce5d2ce7305a0112063ea79c98bdb64bb308e7ed50e0f0ef42ef9c62ad6b76] <==
	I1014 21:20:25.426076       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1014 21:20:25.427860       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1014 21:20:25.431523       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1014 21:20:25.431661       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:20:25.431709       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1014 21:20:25.431836       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 21:20:25.431936       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-718270"
	I1014 21:20:25.432005       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1014 21:20:25.434279       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1014 21:20:25.467285       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1014 21:20:25.467341       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1014 21:20:25.469570       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1014 21:20:25.471914       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1014 21:20:25.472517       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1014 21:20:25.476420       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1014 21:20:25.476478       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1014 21:20:25.476505       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1014 21:20:25.476541       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 21:20:25.476552       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1014 21:20:25.476558       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1014 21:20:25.478922       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1014 21:20:25.488098       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:20:25.488293       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:20:25.490152       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1014 21:20:25.490243       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [1237eb5a0ce86df8135e2cfe766f31419181e360858f38b8268491a866b4c28f] <==
	I1014 21:20:23.787760       1 server_linux.go:53] "Using iptables proxy"
	I1014 21:20:23.968147       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 21:20:24.270939       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 21:20:24.272283       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1014 21:20:24.280421       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 21:20:24.317878       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:20:24.317975       1 server_linux.go:132] "Using iptables Proxier"
	I1014 21:20:24.325910       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 21:20:24.326293       1 server.go:527] "Version info" version="v1.34.1"
	I1014 21:20:24.326643       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:20:24.327934       1 config.go:200] "Starting service config controller"
	I1014 21:20:24.327993       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 21:20:24.328036       1 config.go:106] "Starting endpoint slice config controller"
	I1014 21:20:24.328087       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 21:20:24.328125       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 21:20:24.328179       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 21:20:24.329395       1 config.go:309] "Starting node config controller"
	I1014 21:20:24.329468       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 21:20:24.330773       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 21:20:24.444031       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 21:20:24.444063       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 21:20:24.444071       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f9f8c05de06667afe6beb9fb43f7e7bb59e3bb9721fbd64688bcf0e60d2e0130] <==
	I1014 21:20:19.884364       1 serving.go:386] Generated self-signed cert in-memory
	I1014 21:20:24.248959       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 21:20:24.248992       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:20:24.256243       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 21:20:24.256458       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1014 21:20:24.256515       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1014 21:20:24.256567       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 21:20:24.257785       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:20:24.257858       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:20:24.257909       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 21:20:24.259569       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 21:20:24.357751       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1014 21:20:24.357970       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:20:24.359801       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 14 21:20:21 newest-cni-718270 kubelet[728]: E1014 21:20:21.291088     728 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-718270\" not found" node="newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.073875     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: E1014 21:20:22.620646     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-718270\" already exists" pod="kube-system/kube-controller-manager-newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.620682     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: E1014 21:20:22.644752     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-718270\" already exists" pod="kube-system/kube-scheduler-newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.644801     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.644811     728 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.644896     728 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.644922     728 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.645993     728 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: E1014 21:20:22.686633     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-718270\" already exists" pod="kube-system/etcd-newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.686671     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: E1014 21:20:22.720547     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-718270\" already exists" pod="kube-system/kube-apiserver-newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.852405     728 apiserver.go:52] "Watching apiserver"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.880780     728 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.965869     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9808191-fd7e-4064-a7a4-f8d026004ef8-xtables-lock\") pod \"kube-proxy-97kps\" (UID: \"c9808191-fd7e-4064-a7a4-f8d026004ef8\") " pod="kube-system/kube-proxy-97kps"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.965930     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e5738b0-840c-4264-b49e-f9e41a70647e-xtables-lock\") pod \"kindnet-vdqbd\" (UID: \"5e5738b0-840c-4264-b49e-f9e41a70647e\") " pod="kube-system/kindnet-vdqbd"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.965962     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e5738b0-840c-4264-b49e-f9e41a70647e-lib-modules\") pod \"kindnet-vdqbd\" (UID: \"5e5738b0-840c-4264-b49e-f9e41a70647e\") " pod="kube-system/kindnet-vdqbd"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.965994     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9808191-fd7e-4064-a7a4-f8d026004ef8-lib-modules\") pod \"kube-proxy-97kps\" (UID: \"c9808191-fd7e-4064-a7a4-f8d026004ef8\") " pod="kube-system/kube-proxy-97kps"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.966028     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5e5738b0-840c-4264-b49e-f9e41a70647e-cni-cfg\") pod \"kindnet-vdqbd\" (UID: \"5e5738b0-840c-4264-b49e-f9e41a70647e\") " pod="kube-system/kindnet-vdqbd"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.997962     728 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 14 21:20:23 newest-cni-718270 kubelet[728]: W1014 21:20:23.244299     728 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370/crio-10a21d5b021e740ac2ad1e6a25d5b8e96172cf83752c737c0f74f00a2961f62c WatchSource:0}: Error finding container 10a21d5b021e740ac2ad1e6a25d5b8e96172cf83752c737c0f74f00a2961f62c: Status 404 returned error can't find the container with id 10a21d5b021e740ac2ad1e6a25d5b8e96172cf83752c737c0f74f00a2961f62c
	Oct 14 21:20:25 newest-cni-718270 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 14 21:20:25 newest-cni-718270 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 14 21:20:25 newest-cni-718270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-718270 -n newest-cni-718270
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-718270 -n newest-cni-718270: exit status 2 (359.165958ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-718270 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-bdhlq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-47b48 kubernetes-dashboard-855c9754f9-r7dw9
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-718270 describe pod coredns-66bc5c9577-bdhlq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-47b48 kubernetes-dashboard-855c9754f9-r7dw9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-718270 describe pod coredns-66bc5c9577-bdhlq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-47b48 kubernetes-dashboard-855c9754f9-r7dw9: exit status 1 (88.01437ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-bdhlq" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-47b48" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-r7dw9" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-718270 describe pod coredns-66bc5c9577-bdhlq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-47b48 kubernetes-dashboard-855c9754f9-r7dw9: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-718270
helpers_test.go:243: (dbg) docker inspect newest-cni-718270:

-- stdout --
	[
	    {
	        "Id": "9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370",
	        "Created": "2025-10-14T21:19:22.499110218Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 754156,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:20:07.439090943Z",
	            "FinishedAt": "2025-10-14T21:20:06.259914795Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370/hostname",
	        "HostsPath": "/var/lib/docker/containers/9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370/hosts",
	        "LogPath": "/var/lib/docker/containers/9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370/9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370-json.log",
	        "Name": "/newest-cni-718270",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-718270:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-718270",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370",
	                "LowerDir": "/var/lib/docker/overlay2/8c30d9867a2116971c960b229284f00bed5d0fdc18c4af19422b4ddc6381f327-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8c30d9867a2116971c960b229284f00bed5d0fdc18c4af19422b4ddc6381f327/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8c30d9867a2116971c960b229284f00bed5d0fdc18c4af19422b4ddc6381f327/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8c30d9867a2116971c960b229284f00bed5d0fdc18c4af19422b4ddc6381f327/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-718270",
	                "Source": "/var/lib/docker/volumes/newest-cni-718270/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-718270",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-718270",
	                "name.minikube.sigs.k8s.io": "newest-cni-718270",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a3e6bb2de3cc2607658ae1ebe93c1069f41dcc35520abcc5edf7424198344a3",
	            "SandboxKey": "/var/run/docker/netns/2a3e6bb2de3c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33822"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33823"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33826"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33824"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33825"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-718270": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:de:ee:08:1d:f7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "18b1986d1416397d99b38ffdc32e9f008860c1b13740e1977cc68be30c32f62d",
	                    "EndpointID": "e78a049a8b6a41d4c54ee787bd83748611816e6b95d5029e065f12195bf3523e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-718270",
	                        "9615898dd977"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
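Note on reading the inspect output above: the NetworkSettings.Ports map is what the harness uses to resolve the container's SSH port. A minimal sketch of pulling a single field with Docker's Go-template flag, using only names taken from this report (newest-cni-718270, port 22/tcp):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-718270
	# with the state captured above this prints 33822, the host port mapped to the container's SSH port (22/tcp)

The same template string appears verbatim in the cli_runner invocations further down in these logs.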
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718270 -n newest-cni-718270
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718270 -n newest-cni-718270: exit status 2 (356.691068ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
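The --format={{.Host}} template above only prints the host state, which is why the command reports Running while still exiting non-zero: this profile was just paused, so at least one of the other checked components (kubelet/apiserver) is not running, and the harness therefore flags the exit code as possibly benign. A hedged way to see the full per-component breakdown for this profile, assuming it still exists, is:

	out/minikube-linux-arm64 status -p newest-cni-718270 --output json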
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-718270 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-718270 logs -n 25: (1.183721283s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-319221 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ start   │ -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-650951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │                     │
	│ stop    │ -p embed-certs-650951 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-650951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:17 UTC │
	│ start   │ -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:17 UTC │ 14 Oct 25 21:18 UTC │
	│ image   │ default-k8s-diff-port-319221 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ pause   │ -p default-k8s-diff-port-319221 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-319221                                                                                                                                                                                                               │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ delete  │ -p default-k8s-diff-port-319221                                                                                                                                                                                                               │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ delete  │ -p disable-driver-mounts-457849                                                                                                                                                                                                               │ disable-driver-mounts-457849 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ start   │ -p no-preload-970287 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:20 UTC │
	│ image   │ embed-certs-650951 image list --format=json                                                                                                                                                                                                   │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ pause   │ -p embed-certs-650951 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │                     │
	│ delete  │ -p embed-certs-650951                                                                                                                                                                                                                         │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ delete  │ -p embed-certs-650951                                                                                                                                                                                                                         │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ start   │ -p newest-cni-718270 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-718270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	│ stop    │ -p newest-cni-718270 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ addons  │ enable dashboard -p newest-cni-718270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ start   │ -p newest-cni-718270 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ addons  │ enable metrics-server -p no-preload-970287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	│ stop    │ -p no-preload-970287 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	│ image   │ newest-cni-718270 image list --format=json                                                                                                                                                                                                    │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ pause   │ -p newest-cni-718270 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:20:07
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:20:07.134354  754019 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:20:07.134685  754019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:20:07.134703  754019 out.go:374] Setting ErrFile to fd 2...
	I1014 21:20:07.134713  754019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:20:07.135093  754019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:20:07.135610  754019 out.go:368] Setting JSON to false
	I1014 21:20:07.136773  754019 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14558,"bootTime":1760462250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:20:07.136855  754019 start.go:141] virtualization:  
	I1014 21:20:07.140373  754019 out.go:179] * [newest-cni-718270] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:20:07.144963  754019 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:20:07.145035  754019 notify.go:220] Checking for updates...
	I1014 21:20:07.156463  754019 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:20:07.162126  754019 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:20:07.166245  754019 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:20:07.169573  754019 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:20:07.172540  754019 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:20:07.176080  754019 config.go:182] Loaded profile config "newest-cni-718270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:07.176799  754019 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:20:07.207727  754019 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:20:07.207914  754019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:20:07.272227  754019 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:20:07.262374652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:20:07.272432  754019 docker.go:318] overlay module found
	I1014 21:20:07.275576  754019 out.go:179] * Using the docker driver based on existing profile
	I1014 21:20:07.278402  754019 start.go:305] selected driver: docker
	I1014 21:20:07.278421  754019 start.go:925] validating driver "docker" against &{Name:newest-cni-718270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:20:07.278544  754019 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:20:07.279261  754019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:20:07.335404  754019 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:20:07.326146898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:20:07.335753  754019 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1014 21:20:07.335797  754019 cni.go:84] Creating CNI manager for ""
	I1014 21:20:07.335873  754019 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:20:07.335916  754019 start.go:349] cluster config:
	{Name:newest-cni-718270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:20:07.339032  754019 out.go:179] * Starting "newest-cni-718270" primary control-plane node in "newest-cni-718270" cluster
	I1014 21:20:07.341800  754019 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:20:07.344678  754019 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:20:07.347442  754019 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:20:07.347502  754019 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 21:20:07.347516  754019 cache.go:58] Caching tarball of preloaded images
	I1014 21:20:07.347514  754019 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:20:07.347606  754019 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:20:07.347617  754019 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 21:20:07.347741  754019 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/config.json ...
	I1014 21:20:07.366876  754019 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:20:07.366900  754019 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:20:07.366922  754019 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:20:07.366947  754019 start.go:360] acquireMachinesLock for newest-cni-718270: {Name:mk4b94f993721098e6e50976108704497d217d5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:20:07.367004  754019 start.go:364] duration metric: took 34.446µs to acquireMachinesLock for "newest-cni-718270"
	I1014 21:20:07.367027  754019 start.go:96] Skipping create...Using existing machine configuration
	I1014 21:20:07.367044  754019 fix.go:54] fixHost starting: 
	I1014 21:20:07.367295  754019 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:07.403346  754019 fix.go:112] recreateIfNeeded on newest-cni-718270: state=Stopped err=<nil>
	W1014 21:20:07.403386  754019 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 21:20:07.406806  754019 out.go:252] * Restarting existing docker container for "newest-cni-718270" ...
	I1014 21:20:07.406910  754019 cli_runner.go:164] Run: docker start newest-cni-718270
	I1014 21:20:07.710141  754019 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:07.732519  754019 kic.go:430] container "newest-cni-718270" state is running.
	I1014 21:20:07.732946  754019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718270
	I1014 21:20:07.757790  754019 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/config.json ...
	I1014 21:20:07.758026  754019 machine.go:93] provisionDockerMachine start ...
	I1014 21:20:07.758090  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:07.780260  754019 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:07.780635  754019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1014 21:20:07.780652  754019 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:20:07.782615  754019 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1014 21:20:10.928440  754019 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-718270
	
	I1014 21:20:10.928514  754019 ubuntu.go:182] provisioning hostname "newest-cni-718270"
	I1014 21:20:10.928598  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:10.946074  754019 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:10.946408  754019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1014 21:20:10.946425  754019 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-718270 && echo "newest-cni-718270" | sudo tee /etc/hostname
	I1014 21:20:11.111671  754019 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-718270
	
	I1014 21:20:11.111755  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:11.132960  754019 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:11.133268  754019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1014 21:20:11.133292  754019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-718270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-718270/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-718270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:20:11.276674  754019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 21:20:11.276703  754019 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:20:11.276733  754019 ubuntu.go:190] setting up certificates
	I1014 21:20:11.276743  754019 provision.go:84] configureAuth start
	I1014 21:20:11.276806  754019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718270
	I1014 21:20:11.294451  754019 provision.go:143] copyHostCerts
	I1014 21:20:11.294580  754019 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:20:11.294605  754019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:20:11.294689  754019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:20:11.294795  754019 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:20:11.294807  754019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:20:11.294834  754019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:20:11.294892  754019 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:20:11.294901  754019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:20:11.294924  754019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:20:11.294973  754019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.newest-cni-718270 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-718270]
	I1014 21:20:12.196847  754019 provision.go:177] copyRemoteCerts
	I1014 21:20:12.196918  754019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:20:12.196958  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:12.216799  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:12.322441  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 21:20:12.340555  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:20:12.358874  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 21:20:12.376881  754019 provision.go:87] duration metric: took 1.100112396s to configureAuth
	I1014 21:20:12.376904  754019 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:20:12.377093  754019 config.go:182] Loaded profile config "newest-cni-718270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:12.377212  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:12.394315  754019 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:12.394623  754019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1014 21:20:12.394643  754019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:20:12.701850  754019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:20:12.701872  754019 machine.go:96] duration metric: took 4.943837245s to provisionDockerMachine
	I1014 21:20:12.701883  754019 start.go:293] postStartSetup for "newest-cni-718270" (driver="docker")
	I1014 21:20:12.701893  754019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:20:12.701954  754019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:20:12.702008  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:12.720190  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:12.824501  754019 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:20:12.828073  754019 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:20:12.828116  754019 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:20:12.828128  754019 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:20:12.828183  754019 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:20:12.828269  754019 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:20:12.828398  754019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:20:12.836051  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:20:12.855105  754019 start.go:296] duration metric: took 153.205972ms for postStartSetup
	I1014 21:20:12.855219  754019 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:20:12.855273  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:12.874608  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:12.979148  754019 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:20:12.983745  754019 fix.go:56] duration metric: took 5.616700881s for fixHost
	I1014 21:20:12.983772  754019 start.go:83] releasing machines lock for "newest-cni-718270", held for 5.616755946s
	I1014 21:20:12.983843  754019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718270
	I1014 21:20:13.000549  754019 ssh_runner.go:195] Run: cat /version.json
	I1014 21:20:13.000620  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:13.000872  754019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:20:13.000930  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:13.020193  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:13.028428  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:13.120200  754019 ssh_runner.go:195] Run: systemctl --version
	I1014 21:20:13.210756  754019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:20:13.247270  754019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:20:13.251931  754019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:20:13.252004  754019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:20:13.260216  754019 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 21:20:13.260291  754019 start.go:495] detecting cgroup driver to use...
	I1014 21:20:13.260416  754019 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:20:13.260469  754019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:20:13.275982  754019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:20:13.288957  754019 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:20:13.289022  754019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:20:13.305079  754019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:20:13.319186  754019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:20:13.441089  754019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:20:13.579697  754019 docker.go:234] disabling docker service ...
	I1014 21:20:13.579808  754019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:20:13.596746  754019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:20:13.611616  754019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:20:13.737848  754019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:20:13.856259  754019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:20:13.871464  754019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:20:13.887612  754019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:20:13.887681  754019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.898094  754019 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:20:13.898174  754019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.908629  754019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.918430  754019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.928660  754019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:20:13.937754  754019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.947219  754019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.956774  754019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:13.967169  754019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:20:13.975143  754019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:20:13.982785  754019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:14.116745  754019 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 21:20:14.263711  754019 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:20:14.263858  754019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:20:14.268618  754019 start.go:563] Will wait 60s for crictl version
	I1014 21:20:14.268687  754019 ssh_runner.go:195] Run: which crictl
	I1014 21:20:14.272565  754019 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:20:14.298294  754019 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:20:14.298466  754019 ssh_runner.go:195] Run: crio --version
	I1014 21:20:14.330916  754019 ssh_runner.go:195] Run: crio --version
	I1014 21:20:14.366083  754019 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 21:20:14.368727  754019 cli_runner.go:164] Run: docker network inspect newest-cni-718270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:20:14.390570  754019 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1014 21:20:14.394557  754019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:20:14.406994  754019 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1014 21:20:14.409789  754019 kubeadm.go:883] updating cluster {Name:newest-cni-718270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:20:14.409934  754019 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:20:14.410012  754019 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:20:14.445834  754019 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:20:14.445860  754019 crio.go:433] Images already preloaded, skipping extraction
	I1014 21:20:14.445916  754019 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:20:14.471310  754019 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:20:14.471336  754019 cache_images.go:85] Images are preloaded, skipping loading
	I1014 21:20:14.471344  754019 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1014 21:20:14.471438  754019 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-718270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 21:20:14.471523  754019 ssh_runner.go:195] Run: crio config
	I1014 21:20:14.526158  754019 cni.go:84] Creating CNI manager for ""
	I1014 21:20:14.526190  754019 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:20:14.526212  754019 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1014 21:20:14.526235  754019 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-718270 NodeName:newest-cni-718270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:20:14.526466  754019 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-718270"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 21:20:14.526556  754019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:20:14.533974  754019 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:20:14.534061  754019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:20:14.541524  754019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1014 21:20:14.554814  754019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:20:14.567474  754019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1014 21:20:14.581058  754019 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:20:14.584758  754019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
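The bash one-liner above is an idempotent update of /etc/hosts on the node: any stale line ending in control-plane.minikube.internal is dropped and the current IP/name pair is appended. A minimal Go sketch of the same idempotent rewrite, assuming an illustrative hostsPath rather than minikube's actual ssh_runner plumbing:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites a hosts file so that exactly one line maps
// the given name to the given IP, mirroring the grep/echo one-liner above.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing mapping for the name, whatever IP it pointed at.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Hypothetical usage; the real run targets /etc/hosts inside the node.
	if err := ensureHostsEntry("/tmp/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}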
	I1014 21:20:14.594932  754019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:14.716570  754019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:20:14.734870  754019 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270 for IP: 192.168.85.2
	I1014 21:20:14.734892  754019 certs.go:195] generating shared ca certs ...
	I1014 21:20:14.734909  754019 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:14.735067  754019 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:20:14.735116  754019 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:20:14.735129  754019 certs.go:257] generating profile certs ...
	I1014 21:20:14.735217  754019 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/client.key
	I1014 21:20:14.735293  754019 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.key.df573141
	I1014 21:20:14.735335  754019 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/proxy-client.key
	I1014 21:20:14.735462  754019 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:20:14.735498  754019 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:20:14.735509  754019 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:20:14.735533  754019 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:20:14.735558  754019 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:20:14.735586  754019 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:20:14.735637  754019 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:20:14.736438  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:20:14.758409  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:20:14.776281  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:20:14.793679  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:20:14.811490  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 21:20:14.837714  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 21:20:14.864140  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:20:14.889890  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/newest-cni-718270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 21:20:14.920159  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:20:14.947308  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:20:14.970838  754019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:20:14.989166  754019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:20:15.004057  754019 ssh_runner.go:195] Run: openssl version
	I1014 21:20:15.015587  754019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:20:15.027423  754019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:15.034441  754019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:15.034578  754019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:15.078626  754019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 21:20:15.088718  754019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:20:15.098290  754019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:20:15.102477  754019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:20:15.102592  754019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:20:15.144904  754019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:20:15.153585  754019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:20:15.162311  754019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:20:15.166491  754019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:20:15.166581  754019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:20:15.216090  754019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
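The sequence above copies each CA into /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0), which is how OpenSSL locates trust anchors during verification. A minimal Go sketch of that hash-and-symlink step, shelling out to openssl the same way the log does; the paths are illustrative, not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and
// creates the <hash>.0 symlink that the verifier expects in certsDir.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace a stale link if one exists
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative paths matching the ones shown in the log.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}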
	I1014 21:20:15.224652  754019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:20:15.230727  754019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 21:20:15.280805  754019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 21:20:15.338649  754019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 21:20:15.409894  754019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 21:20:15.484821  754019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 21:20:15.556404  754019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
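Each of the `openssl x509 -checkend 86400` runs above asks whether the certificate expires within the next 24 hours; a non-zero exit would cause the certificate to be regenerated before the cluster restart continues. The same check expressed with Go's crypto/x509, as a sketch (the path is one of the files checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window; -checkend 86400 above corresponds to 24h.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}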
	I1014 21:20:15.677557  754019 kubeadm.go:400] StartCluster: {Name:newest-cni-718270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:20:15.677653  754019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:20:15.677725  754019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:20:15.774849  754019 cri.go:89] found id: "246522f17441699f43d5b755d1b3968e04447f890b4feafb0eb44e40ec79cb4f"
	I1014 21:20:15.774872  754019 cri.go:89] found id: "6e970e1d72789f798808d196b5c8410d1717b9fb6b756584e8ef0b370e39b72c"
	I1014 21:20:15.774877  754019 cri.go:89] found id: "d1ce5d2ce7305a0112063ea79c98bdb64bb308e7ed50e0f0ef42ef9c62ad6b76"
	I1014 21:20:15.774881  754019 cri.go:89] found id: "f9f8c05de06667afe6beb9fb43f7e7bb59e3bb9721fbd64688bcf0e60d2e0130"
	I1014 21:20:15.774884  754019 cri.go:89] found id: ""
	I1014 21:20:15.774940  754019 ssh_runner.go:195] Run: sudo runc list -f json
	W1014 21:20:15.797126  754019 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:20:15Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:20:15.797207  754019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:20:15.827155  754019 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 21:20:15.827177  754019 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 21:20:15.827227  754019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 21:20:15.843285  754019 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 21:20:15.843857  754019 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-718270" does not appear in /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:20:15.844090  754019 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-545047/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-718270" cluster setting kubeconfig missing "newest-cni-718270" context setting]
	I1014 21:20:15.844829  754019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:15.848965  754019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 21:20:15.885698  754019 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1014 21:20:15.885729  754019 kubeadm.go:601] duration metric: took 58.544655ms to restartPrimaryControlPlane
	I1014 21:20:15.885737  754019 kubeadm.go:402] duration metric: took 208.198673ms to StartCluster
	I1014 21:20:15.885751  754019 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:15.885816  754019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:20:15.886774  754019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:15.886985  754019 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:20:15.887395  754019 config.go:182] Loaded profile config "newest-cni-718270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:15.887445  754019 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 21:20:15.887511  754019 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-718270"
	I1014 21:20:15.887525  754019 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-718270"
	W1014 21:20:15.887531  754019 addons.go:247] addon storage-provisioner should already be in state true
	I1014 21:20:15.887551  754019 host.go:66] Checking if "newest-cni-718270" exists ...
	I1014 21:20:15.888069  754019 addons.go:69] Setting dashboard=true in profile "newest-cni-718270"
	I1014 21:20:15.888087  754019 addons.go:238] Setting addon dashboard=true in "newest-cni-718270"
	W1014 21:20:15.888093  754019 addons.go:247] addon dashboard should already be in state true
	I1014 21:20:15.888122  754019 host.go:66] Checking if "newest-cni-718270" exists ...
	I1014 21:20:15.891768  754019 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:15.892157  754019 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:15.899140  754019 addons.go:69] Setting default-storageclass=true in profile "newest-cni-718270"
	I1014 21:20:15.899179  754019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-718270"
	I1014 21:20:15.899515  754019 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:15.900224  754019 out.go:179] * Verifying Kubernetes components...
	I1014 21:20:15.905445  754019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:16.011967  754019 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:20:16.012056  754019 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1014 21:20:16.015069  754019 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:20:16.015094  754019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 21:20:16.015157  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:16.019125  754019 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1014 21:20:16.024059  754019 addons.go:238] Setting addon default-storageclass=true in "newest-cni-718270"
	W1014 21:20:16.024080  754019 addons.go:247] addon default-storageclass should already be in state true
	I1014 21:20:16.024131  754019 host.go:66] Checking if "newest-cni-718270" exists ...
	I1014 21:20:16.024818  754019 cli_runner.go:164] Run: docker container inspect newest-cni-718270 --format={{.State.Status}}
	I1014 21:20:16.024968  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1014 21:20:16.024979  754019 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1014 21:20:16.025027  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:16.080715  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:16.136708  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:16.144655  754019 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 21:20:16.144676  754019 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 21:20:16.144737  754019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718270
	I1014 21:20:16.197439  754019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/newest-cni-718270/id_rsa Username:docker}
	I1014 21:20:16.481649  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1014 21:20:16.481670  754019 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1014 21:20:16.502619  754019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:20:16.514590  754019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:20:16.549423  754019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 21:20:16.583623  754019 api_server.go:52] waiting for apiserver process to appear ...
	I1014 21:20:16.583698  754019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 21:20:16.590156  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1014 21:20:16.590180  754019 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1014 21:20:16.682575  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1014 21:20:16.682616  754019 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1014 21:20:16.819942  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1014 21:20:16.819963  754019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1014 21:20:16.902917  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1014 21:20:16.902939  754019 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1014 21:20:16.940597  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1014 21:20:16.940620  754019 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1014 21:20:16.973687  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1014 21:20:16.973706  754019 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1014 21:20:16.993947  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1014 21:20:16.993967  754019 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1014 21:20:17.013338  754019 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 21:20:17.013359  754019 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1014 21:20:17.029955  754019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 21:20:23.774881  754019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.260206047s)
	I1014 21:20:23.774951  754019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.225507569s)
	I1014 21:20:23.775284  754019 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.191572422s)
	I1014 21:20:23.775309  754019 api_server.go:72] duration metric: took 7.888282904s to wait for apiserver process to appear ...
	I1014 21:20:23.775316  754019 api_server.go:88] waiting for apiserver healthz status ...
	I1014 21:20:23.775333  754019 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:20:23.775631  754019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.745646637s)
	I1014 21:20:23.779064  754019 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-718270 addons enable metrics-server
	
	I1014 21:20:23.797987  754019 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 21:20:23.798020  754019 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 21:20:23.816514  754019 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1014 21:20:23.819414  754019 addons.go:514] duration metric: took 7.931959968s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1014 21:20:24.276076  754019 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 21:20:24.307874  754019 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1014 21:20:24.309041  754019 api_server.go:141] control plane version: v1.34.1
	I1014 21:20:24.309072  754019 api_server.go:131] duration metric: took 533.748771ms to wait for apiserver health ...
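The healthz wait above first gets a 500 (the rbac/bootstrap-roles post-start hook had not yet finished) and then a 200 roughly half a second later, at which point the control plane is treated as healthy. A minimal sketch of that poll loop against the apiserver endpoint; the URL, timeout, and skipped TLS verification are assumptions for illustration, since minikube's real client authenticates with the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. TLS verification is skipped in this sketch because
// the endpoint presents the cluster's self-signed certificate.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}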
	I1014 21:20:24.309082  754019 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 21:20:24.312890  754019 system_pods.go:59] 8 kube-system pods found
	I1014 21:20:24.312933  754019 system_pods.go:61] "coredns-66bc5c9577-bdhlq" [b96a6307-6a92-458c-8cbf-c5c102940d5f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1014 21:20:24.312942  754019 system_pods.go:61] "etcd-newest-cni-718270" [f355740c-adf7-429e-a1da-7ce4543c4d04] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 21:20:24.312952  754019 system_pods.go:61] "kindnet-vdqbd" [5e5738b0-840c-4264-b49e-f9e41a70647e] Running
	I1014 21:20:24.312960  754019 system_pods.go:61] "kube-apiserver-newest-cni-718270" [d2e80a6a-1be9-4d68-8c1f-ca465eeb82ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 21:20:24.312973  754019 system_pods.go:61] "kube-controller-manager-newest-cni-718270" [66df72c2-6625-4150-bedb-4fbf0594354f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 21:20:24.312978  754019 system_pods.go:61] "kube-proxy-97kps" [c9808191-fd7e-4064-a7a4-f8d026004ef8] Running
	I1014 21:20:24.312991  754019 system_pods.go:61] "kube-scheduler-newest-cni-718270" [815541f5-8b42-4b45-b84d-3c1d72b0060f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 21:20:24.312996  754019 system_pods.go:61] "storage-provisioner" [f93166f5-1e47-44fd-b4b9-95957da83455] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1014 21:20:24.313005  754019 system_pods.go:74] duration metric: took 3.915452ms to wait for pod list to return data ...
	I1014 21:20:24.313019  754019 default_sa.go:34] waiting for default service account to be created ...
	I1014 21:20:24.316689  754019 default_sa.go:45] found service account: "default"
	I1014 21:20:24.316715  754019 default_sa.go:55] duration metric: took 3.689216ms for default service account to be created ...
	I1014 21:20:24.316728  754019 kubeadm.go:586] duration metric: took 8.429700271s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1014 21:20:24.316747  754019 node_conditions.go:102] verifying NodePressure condition ...
	I1014 21:20:24.321611  754019 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 21:20:24.321644  754019 node_conditions.go:123] node cpu capacity is 2
	I1014 21:20:24.321658  754019 node_conditions.go:105] duration metric: took 4.905061ms to run NodePressure ...
	I1014 21:20:24.321670  754019 start.go:241] waiting for startup goroutines ...
	I1014 21:20:24.321677  754019 start.go:246] waiting for cluster config update ...
	I1014 21:20:24.321706  754019 start.go:255] writing updated cluster config ...
	I1014 21:20:24.322007  754019 ssh_runner.go:195] Run: rm -f paused
	I1014 21:20:24.404075  754019 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1014 21:20:24.407291  754019 out.go:179] * Done! kubectl is now configured to use "newest-cni-718270" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.200415808Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.20795709Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=52716703-74e8-468c-8e43-5b589ceadc2a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.228561648Z" level=info msg="Ran pod sandbox 3e9d16463311a2b2b031da250b23b12dcfca55d76b664780a2089e97e39b150a with infra container: kube-system/kindnet-vdqbd/POD" id=52716703-74e8-468c-8e43-5b589ceadc2a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.229754805Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-97kps/POD" id=e6bc5ec6-2cee-417b-b8ec-96534ebe2b3c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.23182912Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.233213567Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ce764dc4-b55c-4fac-8b05-0587785860d4 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.237224439Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e6bc5ec6-2cee-417b-b8ec-96534ebe2b3c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.240988299Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d118b9e8-83d6-4cde-a7a4-81860ab60a61 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.244788475Z" level=info msg="Creating container: kube-system/kindnet-vdqbd/kindnet-cni" id=7f812a76-399f-4a84-b5a0-6e28bb21d365 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.245177103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.249653748Z" level=info msg="Ran pod sandbox 10a21d5b021e740ac2ad1e6a25d5b8e96172cf83752c737c0f74f00a2961f62c with infra container: kube-system/kube-proxy-97kps/POD" id=e6bc5ec6-2cee-417b-b8ec-96534ebe2b3c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.261403577Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=ad58d34d-600d-42be-b24c-0fef412732b9 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.262540208Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.264205784Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=dfaf7952-f5d2-4a27-85ac-09f17cc72f98 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.265362592Z" level=info msg="Creating container: kube-system/kube-proxy-97kps/kube-proxy" id=44496353-daaa-402f-9622-ab27b7c002e1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.265682311Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.267164237Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.281954095Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.282817514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.324256246Z" level=info msg="Created container dbd481283f0a185c6d4e4bca508c40b099a0b4c6da41d1ceb8c9df4e64ec7142: kube-system/kindnet-vdqbd/kindnet-cni" id=7f812a76-399f-4a84-b5a0-6e28bb21d365 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.325171071Z" level=info msg="Starting container: dbd481283f0a185c6d4e4bca508c40b099a0b4c6da41d1ceb8c9df4e64ec7142" id=cbffa4b2-95c9-4e93-8010-f03a8acf3d4c name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.328663582Z" level=info msg="Started container" PID=1061 containerID=dbd481283f0a185c6d4e4bca508c40b099a0b4c6da41d1ceb8c9df4e64ec7142 description=kube-system/kindnet-vdqbd/kindnet-cni id=cbffa4b2-95c9-4e93-8010-f03a8acf3d4c name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e9d16463311a2b2b031da250b23b12dcfca55d76b664780a2089e97e39b150a
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.367223495Z" level=info msg="Created container 1237eb5a0ce86df8135e2cfe766f31419181e360858f38b8268491a866b4c28f: kube-system/kube-proxy-97kps/kube-proxy" id=44496353-daaa-402f-9622-ab27b7c002e1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.368227872Z" level=info msg="Starting container: 1237eb5a0ce86df8135e2cfe766f31419181e360858f38b8268491a866b4c28f" id=6658575a-829f-41de-8bee-76a129671bf9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:20:23 newest-cni-718270 crio[610]: time="2025-10-14T21:20:23.37744155Z" level=info msg="Started container" PID=1062 containerID=1237eb5a0ce86df8135e2cfe766f31419181e360858f38b8268491a866b4c28f description=kube-system/kube-proxy-97kps/kube-proxy id=6658575a-829f-41de-8bee-76a129671bf9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=10a21d5b021e740ac2ad1e6a25d5b8e96172cf83752c737c0f74f00a2961f62c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1237eb5a0ce86       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   10a21d5b021e7       kube-proxy-97kps                            kube-system
	dbd481283f0a1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   3e9d16463311a       kindnet-vdqbd                               kube-system
	246522f174416       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      1                   316b34ab42554       etcd-newest-cni-718270                      kube-system
	6e970e1d72789       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            1                   d04ba1eef6bc7       kube-apiserver-newest-cni-718270            kube-system
	d1ce5d2ce7305       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   1                   dbba3576276f3       kube-controller-manager-newest-cni-718270   kube-system
	f9f8c05de0666       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            1                   a7b6f5c6ea3d3       kube-scheduler-newest-cni-718270            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-718270
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-718270
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=newest-cni-718270
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T21_19_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 21:19:53 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-718270
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 21:20:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 21:20:22 +0000   Tue, 14 Oct 2025 21:19:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 21:20:22 +0000   Tue, 14 Oct 2025 21:19:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 21:20:22 +0000   Tue, 14 Oct 2025 21:19:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 14 Oct 2025 21:20:22 +0000   Tue, 14 Oct 2025 21:19:46 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-718270
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 15dd3ba01c1c42e29ade3f0e46280220
	  System UUID:                a5834ddd-d2a7-4491-bd6d-465132e0fdf2
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-718270                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-vdqbd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-718270             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-718270    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-97kps                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-718270             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node newest-cni-718270 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node newest-cni-718270 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node newest-cni-718270 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node newest-cni-718270 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node newest-cni-718270 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     35s                kubelet          Node newest-cni-718270 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           30s                node-controller  Node newest-cni-718270 event: Registered Node newest-cni-718270 in Controller
	  Normal   Starting                 16s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16s (x2 over 16s)  kubelet          Node newest-cni-718270 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s (x2 over 16s)  kubelet          Node newest-cni-718270 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s (x2 over 16s)  kubelet          Node newest-cni-718270 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-718270 event: Registered Node newest-cni-718270 in Controller
	
	
	==> dmesg <==
	[Oct14 20:49] overlayfs: idmapped layers are currently not supported
	[ +15.861847] overlayfs: idmapped layers are currently not supported
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	[Oct14 21:03] overlayfs: idmapped layers are currently not supported
	[Oct14 21:04] overlayfs: idmapped layers are currently not supported
	[Oct14 21:12] overlayfs: idmapped layers are currently not supported
	[Oct14 21:13] overlayfs: idmapped layers are currently not supported
	[ +32.288963] overlayfs: idmapped layers are currently not supported
	[Oct14 21:15] overlayfs: idmapped layers are currently not supported
	[Oct14 21:16] overlayfs: idmapped layers are currently not supported
	[ +21.775876] overlayfs: idmapped layers are currently not supported
	[Oct14 21:17] overlayfs: idmapped layers are currently not supported
	[Oct14 21:18] overlayfs: idmapped layers are currently not supported
	[Oct14 21:19] overlayfs: idmapped layers are currently not supported
	[ +43.590859] overlayfs: idmapped layers are currently not supported
	[Oct14 21:20] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [246522f17441699f43d5b755d1b3968e04447f890b4feafb0eb44e40ec79cb4f] <==
	{"level":"warn","ts":"2025-10-14T21:20:19.981969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.010184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.075322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.091740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.108562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.134562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.173784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.198549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.222537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.250999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.312629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.337155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.373099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.409059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.434280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.477047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.521888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.585392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.631694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.701671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.725079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.766227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.794163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:20.959182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33724","server-name":"","error":"EOF"}
	2025/10/14 21:20:25 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> kernel <==
	 21:20:30 up  4:03,  0 user,  load average: 5.05, 3.32, 2.32
	Linux newest-cni-718270 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dbd481283f0a185c6d4e4bca508c40b099a0b4c6da41d1ceb8c9df4e64ec7142] <==
	I1014 21:20:23.488935       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:20:23.492033       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1014 21:20:23.495216       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:20:23.495305       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:20:23.495344       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:20:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:20:23.708076       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:20:23.708282       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:20:23.708393       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:20:23.710342       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [6e970e1d72789f798808d196b5c8410d1717b9fb6b756584e8ef0b370e39b72c] <==
	I1014 21:20:22.529900       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 21:20:22.529996       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1014 21:20:22.530121       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 21:20:22.530170       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 21:20:22.542739       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1014 21:20:22.556075       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1014 21:20:22.608136       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1014 21:20:22.975218       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 21:20:22.993501       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 21:20:23.129369       1 controller.go:667] quota admission added evaluator for: namespaces
	I1014 21:20:23.318885       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1014 21:20:23.426240       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 21:20:23.458252       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 21:20:23.649256       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.172.115"}
	I1014 21:20:23.723891       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.219.188"}
	{"level":"warn","ts":"2025-10-14T21:20:25.601626Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013bb4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1014 21:20:25.601993       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1014 21:20:25.602163       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1014 21:20:25.602073       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 8.928µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1014 21:20:25.603343       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1014 21:20:25.603510       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.735974ms" method="PATCH" path="/api/v1/namespaces/default/events/newest-cni-718270.186e7855f5e2fd55" result=null
	I1014 21:20:25.824349       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1014 21:20:25.873669       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1014 21:20:25.927108       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 21:20:25.976256       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d1ce5d2ce7305a0112063ea79c98bdb64bb308e7ed50e0f0ef42ef9c62ad6b76] <==
	I1014 21:20:25.426076       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1014 21:20:25.427860       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1014 21:20:25.431523       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1014 21:20:25.431661       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:20:25.431709       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1014 21:20:25.431836       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 21:20:25.431936       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-718270"
	I1014 21:20:25.432005       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1014 21:20:25.434279       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1014 21:20:25.467285       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1014 21:20:25.467341       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1014 21:20:25.469570       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1014 21:20:25.471914       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1014 21:20:25.472517       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1014 21:20:25.476420       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1014 21:20:25.476478       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1014 21:20:25.476505       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1014 21:20:25.476541       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 21:20:25.476552       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1014 21:20:25.476558       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1014 21:20:25.478922       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1014 21:20:25.488098       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:20:25.488293       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 21:20:25.490152       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1014 21:20:25.490243       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [1237eb5a0ce86df8135e2cfe766f31419181e360858f38b8268491a866b4c28f] <==
	I1014 21:20:23.787760       1 server_linux.go:53] "Using iptables proxy"
	I1014 21:20:23.968147       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 21:20:24.270939       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 21:20:24.272283       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1014 21:20:24.280421       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 21:20:24.317878       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:20:24.317975       1 server_linux.go:132] "Using iptables Proxier"
	I1014 21:20:24.325910       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 21:20:24.326293       1 server.go:527] "Version info" version="v1.34.1"
	I1014 21:20:24.326643       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:20:24.327934       1 config.go:200] "Starting service config controller"
	I1014 21:20:24.327993       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 21:20:24.328036       1 config.go:106] "Starting endpoint slice config controller"
	I1014 21:20:24.328087       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 21:20:24.328125       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 21:20:24.328179       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 21:20:24.329395       1 config.go:309] "Starting node config controller"
	I1014 21:20:24.329468       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 21:20:24.330773       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 21:20:24.444031       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 21:20:24.444063       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 21:20:24.444071       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f9f8c05de06667afe6beb9fb43f7e7bb59e3bb9721fbd64688bcf0e60d2e0130] <==
	I1014 21:20:19.884364       1 serving.go:386] Generated self-signed cert in-memory
	I1014 21:20:24.248959       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 21:20:24.248992       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:20:24.256243       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 21:20:24.256458       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1014 21:20:24.256515       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1014 21:20:24.256567       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 21:20:24.257785       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:20:24.257858       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:20:24.257909       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 21:20:24.259569       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 21:20:24.357751       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1014 21:20:24.357970       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:20:24.359801       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 14 21:20:21 newest-cni-718270 kubelet[728]: E1014 21:20:21.291088     728 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-718270\" not found" node="newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.073875     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: E1014 21:20:22.620646     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-718270\" already exists" pod="kube-system/kube-controller-manager-newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.620682     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: E1014 21:20:22.644752     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-718270\" already exists" pod="kube-system/kube-scheduler-newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.644801     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.644811     728 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.644896     728 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.644922     728 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.645993     728 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: E1014 21:20:22.686633     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-718270\" already exists" pod="kube-system/etcd-newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.686671     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: E1014 21:20:22.720547     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-718270\" already exists" pod="kube-system/kube-apiserver-newest-cni-718270"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.852405     728 apiserver.go:52] "Watching apiserver"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.880780     728 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.965869     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9808191-fd7e-4064-a7a4-f8d026004ef8-xtables-lock\") pod \"kube-proxy-97kps\" (UID: \"c9808191-fd7e-4064-a7a4-f8d026004ef8\") " pod="kube-system/kube-proxy-97kps"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.965930     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e5738b0-840c-4264-b49e-f9e41a70647e-xtables-lock\") pod \"kindnet-vdqbd\" (UID: \"5e5738b0-840c-4264-b49e-f9e41a70647e\") " pod="kube-system/kindnet-vdqbd"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.965962     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e5738b0-840c-4264-b49e-f9e41a70647e-lib-modules\") pod \"kindnet-vdqbd\" (UID: \"5e5738b0-840c-4264-b49e-f9e41a70647e\") " pod="kube-system/kindnet-vdqbd"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.965994     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9808191-fd7e-4064-a7a4-f8d026004ef8-lib-modules\") pod \"kube-proxy-97kps\" (UID: \"c9808191-fd7e-4064-a7a4-f8d026004ef8\") " pod="kube-system/kube-proxy-97kps"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.966028     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5e5738b0-840c-4264-b49e-f9e41a70647e-cni-cfg\") pod \"kindnet-vdqbd\" (UID: \"5e5738b0-840c-4264-b49e-f9e41a70647e\") " pod="kube-system/kindnet-vdqbd"
	Oct 14 21:20:22 newest-cni-718270 kubelet[728]: I1014 21:20:22.997962     728 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 14 21:20:23 newest-cni-718270 kubelet[728]: W1014 21:20:23.244299     728 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9615898dd977fc1456260e5b4d36b42dac4c00578c732f0ea5edbae0504c0370/crio-10a21d5b021e740ac2ad1e6a25d5b8e96172cf83752c737c0f74f00a2961f62c WatchSource:0}: Error finding container 10a21d5b021e740ac2ad1e6a25d5b8e96172cf83752c737c0f74f00a2961f62c: Status 404 returned error can't find the container with id 10a21d5b021e740ac2ad1e6a25d5b8e96172cf83752c737c0f74f00a2961f62c
	Oct 14 21:20:25 newest-cni-718270 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 14 21:20:25 newest-cni-718270 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 14 21:20:25 newest-cni-718270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-718270 -n newest-cni-718270
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-718270 -n newest-cni-718270: exit status 2 (475.9931ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-718270 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-bdhlq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-47b48 kubernetes-dashboard-855c9754f9-r7dw9
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-718270 describe pod coredns-66bc5c9577-bdhlq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-47b48 kubernetes-dashboard-855c9754f9-r7dw9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-718270 describe pod coredns-66bc5c9577-bdhlq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-47b48 kubernetes-dashboard-855c9754f9-r7dw9: exit status 1 (120.962791ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-bdhlq" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-47b48" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-r7dw9" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-718270 describe pod coredns-66bc5c9577-bdhlq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-47b48 kubernetes-dashboard-855c9754f9-r7dw9: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.88s)
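The kubelet entries at the end of the log above show the failed pause attempt stopping kubelet.service on the node while the control-plane containers keep running, which is presumably why the post-mortem status template still prints "Running" even though the overall status exits non-zero. A quick manual check, assuming the newest-cni-718270 profile is still up (illustrative only, not part of the test run):

	out/minikube-linux-arm64 -p newest-cni-718270 ssh -- sudo systemctl is-active kubelet
	out/minikube-linux-arm64 status -p newest-cni-718270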

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-970287 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-970287 --alsologtostderr -v=1: exit status 80 (1.917815157s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-970287 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 21:21:44.566962  762551 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:21:44.567131  762551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:21:44.567161  762551 out.go:374] Setting ErrFile to fd 2...
	I1014 21:21:44.567180  762551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:21:44.567443  762551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:21:44.567721  762551 out.go:368] Setting JSON to false
	I1014 21:21:44.567771  762551 mustload.go:65] Loading cluster: no-preload-970287
	I1014 21:21:44.568162  762551 config.go:182] Loaded profile config "no-preload-970287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:21:44.568742  762551 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:21:44.587128  762551 host.go:66] Checking if "no-preload-970287" exists ...
	I1014 21:21:44.587447  762551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:21:44.646050  762551 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-14 21:21:44.636730204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:21:44.646788  762551 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-970287 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1014 21:21:44.650282  762551 out.go:179] * Pausing node no-preload-970287 ... 
	I1014 21:21:44.653207  762551 host.go:66] Checking if "no-preload-970287" exists ...
	I1014 21:21:44.653570  762551 ssh_runner.go:195] Run: systemctl --version
	I1014 21:21:44.653621  762551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:21:44.684599  762551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:21:44.787283  762551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:21:44.801724  762551 pause.go:52] kubelet running: true
	I1014 21:21:44.801803  762551 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:21:45.060038  762551 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:21:45.060137  762551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:21:45.166928  762551 cri.go:89] found id: "4bcc6a658ca4f326964a68891e92ff5132a7a58c21942626f11381fb247fb86f"
	I1014 21:21:45.166960  762551 cri.go:89] found id: "9d950fd859ff930a00c330f302d797f5be27bec2d4c5ef9580c021fce5851d5d"
	I1014 21:21:45.166966  762551 cri.go:89] found id: "b026968d5f83c6d9b149d2641a2ce45e5705822b62c6b16cfc79a450a3e1b43b"
	I1014 21:21:45.166970  762551 cri.go:89] found id: "e9a7ae186819ae864bbab79c2d91361c9ece7809280dd662a139a19b9dad3324"
	I1014 21:21:45.166974  762551 cri.go:89] found id: "eab921e666565670bd355579f8f27093ec17e0451ca0dd9059faa6ccbfe4f416"
	I1014 21:21:45.166978  762551 cri.go:89] found id: "cb346dcfd25d91290bca1f5ab694ec0a28b9aaa55c5a269ab50381f0fbd4e400"
	I1014 21:21:45.166982  762551 cri.go:89] found id: "882dabee1ff3bf2239143b9f9994fbb5ca25e44aff582e09104e12db7b8feef7"
	I1014 21:21:45.166985  762551 cri.go:89] found id: "27cb84754cd9227030c4fe42c91f3891d3e08309d600c013e0ba42914cd29efc"
	I1014 21:21:45.166988  762551 cri.go:89] found id: "ecf2261fe60659c49b12b831c921d1d8a2f5ae0415991523e41524de6f31cb9a"
	I1014 21:21:45.166996  762551 cri.go:89] found id: "1c0bac6eb96e4833497570b1054b50fb2a2cdbc485bc3e4f08eb66f614245d09"
	I1014 21:21:45.167022  762551 cri.go:89] found id: "b7e2a8d9fee3df01c06c63bcdfedfa290e5f10824bd890a50ba040a29633a6ff"
	I1014 21:21:45.167032  762551 cri.go:89] found id: ""
	I1014 21:21:45.167102  762551 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:21:45.193261  762551 retry.go:31] will retry after 253.695128ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:21:45Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:21:45.447775  762551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:21:45.461826  762551 pause.go:52] kubelet running: false
	I1014 21:21:45.461920  762551 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:21:45.642872  762551 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:21:45.642964  762551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:21:45.728066  762551 cri.go:89] found id: "4bcc6a658ca4f326964a68891e92ff5132a7a58c21942626f11381fb247fb86f"
	I1014 21:21:45.728131  762551 cri.go:89] found id: "9d950fd859ff930a00c330f302d797f5be27bec2d4c5ef9580c021fce5851d5d"
	I1014 21:21:45.728152  762551 cri.go:89] found id: "b026968d5f83c6d9b149d2641a2ce45e5705822b62c6b16cfc79a450a3e1b43b"
	I1014 21:21:45.728173  762551 cri.go:89] found id: "e9a7ae186819ae864bbab79c2d91361c9ece7809280dd662a139a19b9dad3324"
	I1014 21:21:45.728191  762551 cri.go:89] found id: "eab921e666565670bd355579f8f27093ec17e0451ca0dd9059faa6ccbfe4f416"
	I1014 21:21:45.728225  762551 cri.go:89] found id: "cb346dcfd25d91290bca1f5ab694ec0a28b9aaa55c5a269ab50381f0fbd4e400"
	I1014 21:21:45.728244  762551 cri.go:89] found id: "882dabee1ff3bf2239143b9f9994fbb5ca25e44aff582e09104e12db7b8feef7"
	I1014 21:21:45.728264  762551 cri.go:89] found id: "27cb84754cd9227030c4fe42c91f3891d3e08309d600c013e0ba42914cd29efc"
	I1014 21:21:45.728300  762551 cri.go:89] found id: "ecf2261fe60659c49b12b831c921d1d8a2f5ae0415991523e41524de6f31cb9a"
	I1014 21:21:45.728357  762551 cri.go:89] found id: "1c0bac6eb96e4833497570b1054b50fb2a2cdbc485bc3e4f08eb66f614245d09"
	I1014 21:21:45.728376  762551 cri.go:89] found id: "b7e2a8d9fee3df01c06c63bcdfedfa290e5f10824bd890a50ba040a29633a6ff"
	I1014 21:21:45.728394  762551 cri.go:89] found id: ""
	I1014 21:21:45.728489  762551 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:21:45.739971  762551 retry.go:31] will retry after 399.364162ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:21:45Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:21:46.139569  762551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:21:46.153477  762551 pause.go:52] kubelet running: false
	I1014 21:21:46.153546  762551 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1014 21:21:46.327770  762551 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1014 21:21:46.327843  762551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1014 21:21:46.397298  762551 cri.go:89] found id: "4bcc6a658ca4f326964a68891e92ff5132a7a58c21942626f11381fb247fb86f"
	I1014 21:21:46.397322  762551 cri.go:89] found id: "9d950fd859ff930a00c330f302d797f5be27bec2d4c5ef9580c021fce5851d5d"
	I1014 21:21:46.397327  762551 cri.go:89] found id: "b026968d5f83c6d9b149d2641a2ce45e5705822b62c6b16cfc79a450a3e1b43b"
	I1014 21:21:46.397331  762551 cri.go:89] found id: "e9a7ae186819ae864bbab79c2d91361c9ece7809280dd662a139a19b9dad3324"
	I1014 21:21:46.397334  762551 cri.go:89] found id: "eab921e666565670bd355579f8f27093ec17e0451ca0dd9059faa6ccbfe4f416"
	I1014 21:21:46.397338  762551 cri.go:89] found id: "cb346dcfd25d91290bca1f5ab694ec0a28b9aaa55c5a269ab50381f0fbd4e400"
	I1014 21:21:46.397341  762551 cri.go:89] found id: "882dabee1ff3bf2239143b9f9994fbb5ca25e44aff582e09104e12db7b8feef7"
	I1014 21:21:46.397344  762551 cri.go:89] found id: "27cb84754cd9227030c4fe42c91f3891d3e08309d600c013e0ba42914cd29efc"
	I1014 21:21:46.397347  762551 cri.go:89] found id: "ecf2261fe60659c49b12b831c921d1d8a2f5ae0415991523e41524de6f31cb9a"
	I1014 21:21:46.397375  762551 cri.go:89] found id: "1c0bac6eb96e4833497570b1054b50fb2a2cdbc485bc3e4f08eb66f614245d09"
	I1014 21:21:46.397387  762551 cri.go:89] found id: "b7e2a8d9fee3df01c06c63bcdfedfa290e5f10824bd890a50ba040a29633a6ff"
	I1014 21:21:46.397392  762551 cri.go:89] found id: ""
	I1014 21:21:46.397457  762551 ssh_runner.go:195] Run: sudo runc list -f json
	I1014 21:21:46.413114  762551 out.go:203] 
	W1014 21:21:46.415996  762551 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:21:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:21:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1014 21:21:46.416016  762551 out.go:285] * 
	* 
	W1014 21:21:46.423960  762551 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 21:21:46.426944  762551 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-970287 --alsologtostderr -v=1 failed: exit status 80
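The stderr above traces minikube's pause path on this crio node: it checks and disables the kubelet, lists the kube-system/kubernetes-dashboard/istio-operator containers through crictl, then runs `sudo runc list -f json`, which fails because /run/runc does not exist on the node, producing the GUEST_PAUSE exit after the retries. A minimal way to replay the same three checks by hand over the profile's SSH session (a sketch, assuming the node is still running; not part of the test run):

	out/minikube-linux-arm64 -p no-preload-970287 ssh -- sudo systemctl is-active kubelet
	out/minikube-linux-arm64 -p no-preload-970287 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-arm64 -p no-preload-970287 ssh -- sudo runc list -f json

The last command reproduces the "open /run/runc: no such file or directory" error seen in each retry above.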
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-970287
helpers_test.go:243: (dbg) docker inspect no-preload-970287:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497",
	        "Created": "2025-10-14T21:18:55.141228098Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 757747,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:20:32.128940448Z",
	            "FinishedAt": "2025-10-14T21:20:31.021404108Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/hostname",
	        "HostsPath": "/var/lib/docker/containers/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/hosts",
	        "LogPath": "/var/lib/docker/containers/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497-json.log",
	        "Name": "/no-preload-970287",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-970287:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-970287",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497",
	                "LowerDir": "/var/lib/docker/overlay2/9ca0b0ff57c15a632ab5edc016a9da93666239e68b4aebbd8ee7403dacc3a61a-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9ca0b0ff57c15a632ab5edc016a9da93666239e68b4aebbd8ee7403dacc3a61a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9ca0b0ff57c15a632ab5edc016a9da93666239e68b4aebbd8ee7403dacc3a61a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9ca0b0ff57c15a632ab5edc016a9da93666239e68b4aebbd8ee7403dacc3a61a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-970287",
	                "Source": "/var/lib/docker/volumes/no-preload-970287/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-970287",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-970287",
	                "name.minikube.sigs.k8s.io": "no-preload-970287",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "adc6ea98da52cd00f8303436c078b72f20afbb5c95934de5c28feae8da074fc2",
	            "SandboxKey": "/var/run/docker/netns/adc6ea98da52",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33827"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33828"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33829"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-970287": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:01:75:59:74:0e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "006eb97cac5b79644e6bd32a155d4cd1ef5c55fddbcd2356a8ffe605bdf15f5f",
	                    "EndpointID": "91ff71b275316083cb9c427ace956a1d4d50e01fcc0d3bf73caa797921a70544",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-970287",
	                        "9ce7c6b3a1e2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
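The NetworkSettings.Ports block in the inspect output above is where the published host ports come from (22/tcp on 127.0.0.1:33827 is the SSH endpoint the pause command dialed earlier in this log). The same value can be read directly with the Go template the cli_runner used, for example (assuming the container is still up):

	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287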
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970287 -n no-preload-970287
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970287 -n no-preload-970287: exit status 2 (358.226792ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-970287 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-970287 logs -n 25: (1.365237608s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p default-k8s-diff-port-319221 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-319221                                                                                                                                                                                                               │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ delete  │ -p default-k8s-diff-port-319221                                                                                                                                                                                                               │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ delete  │ -p disable-driver-mounts-457849                                                                                                                                                                                                               │ disable-driver-mounts-457849 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ start   │ -p no-preload-970287 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:20 UTC │
	│ image   │ embed-certs-650951 image list --format=json                                                                                                                                                                                                   │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ pause   │ -p embed-certs-650951 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │                     │
	│ delete  │ -p embed-certs-650951                                                                                                                                                                                                                         │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ delete  │ -p embed-certs-650951                                                                                                                                                                                                                         │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ start   │ -p newest-cni-718270 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-718270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	│ stop    │ -p newest-cni-718270 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ addons  │ enable dashboard -p newest-cni-718270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ start   │ -p newest-cni-718270 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ addons  │ enable metrics-server -p no-preload-970287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	│ stop    │ -p no-preload-970287 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ image   │ newest-cni-718270 image list --format=json                                                                                                                                                                                                    │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ pause   │ -p newest-cni-718270 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-970287 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ start   │ -p no-preload-970287 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:21 UTC │
	│ delete  │ -p newest-cni-718270                                                                                                                                                                                                                          │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ delete  │ -p newest-cni-718270                                                                                                                                                                                                                          │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ start   │ -p auto-810782 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-810782                  │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	│ image   │ no-preload-970287 image list --format=json                                                                                                                                                                                                    │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:21 UTC │ 14 Oct 25 21:21 UTC │
	│ pause   │ -p no-preload-970287 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
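The Audit table above records the minikube invocations leading up to this failure; its last two rows are the image list and the pause of the no-preload profile, and the pause row never receives an end time. A rough reproduction sketch using the same flags the table records for this profile (arm64 binary path as in this run):

    out/minikube-linux-arm64 start -p no-preload-970287 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1
    out/minikube-linux-arm64 pause -p no-preload-970287 --alsologtostderr -v=1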
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:20:34
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:20:34.223645  758490 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:20:34.223755  758490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:20:34.223765  758490 out.go:374] Setting ErrFile to fd 2...
	I1014 21:20:34.223770  758490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:20:34.224329  758490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:20:34.224803  758490 out.go:368] Setting JSON to false
	I1014 21:20:34.225755  758490 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14585,"bootTime":1760462250,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:20:34.225843  758490 start.go:141] virtualization:  
	I1014 21:20:34.229757  758490 out.go:179] * [auto-810782] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:20:34.232977  758490 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:20:34.233094  758490 notify.go:220] Checking for updates...
	I1014 21:20:34.239750  758490 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:20:34.243026  758490 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:20:34.246503  758490 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:20:34.249457  758490 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:20:34.252579  758490 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:20:34.256076  758490 config.go:182] Loaded profile config "no-preload-970287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:34.256212  758490 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:20:34.279675  758490 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:20:34.279809  758490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:20:34.338222  758490 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-14 21:20:34.328662707 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:20:34.338332  758490 docker.go:318] overlay module found
	I1014 21:20:34.341561  758490 out.go:179] * Using the docker driver based on user configuration
	I1014 21:20:34.344466  758490 start.go:305] selected driver: docker
	I1014 21:20:34.344485  758490 start.go:925] validating driver "docker" against <nil>
	I1014 21:20:34.344515  758490 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:20:34.345237  758490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:20:34.400129  758490 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-14 21:20:34.390856887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:20:34.400283  758490 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 21:20:34.400589  758490 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:20:34.403585  758490 out.go:179] * Using Docker driver with root privileges
	I1014 21:20:34.406351  758490 cni.go:84] Creating CNI manager for ""
	I1014 21:20:34.406417  758490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:20:34.406433  758490 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 21:20:34.406507  758490 start.go:349] cluster config:
	{Name:auto-810782 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-810782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:20:34.409571  758490 out.go:179] * Starting "auto-810782" primary control-plane node in "auto-810782" cluster
	I1014 21:20:34.412350  758490 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:20:34.415196  758490 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:20:34.418004  758490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:20:34.418050  758490 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 21:20:34.418061  758490 cache.go:58] Caching tarball of preloaded images
	I1014 21:20:34.418110  758490 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:20:34.418160  758490 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:20:34.418170  758490 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 21:20:34.418271  758490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/config.json ...
	I1014 21:20:34.418288  758490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/config.json: {Name:mkf5766e2d9b5ae306968b16baf59991fd0a7831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:34.436559  758490 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:20:34.436582  758490 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:20:34.436603  758490 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:20:34.436624  758490 start.go:360] acquireMachinesLock for auto-810782: {Name:mk46b2543949e29c5e430185a89cc9cdf7ed96d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:20:34.436728  758490 start.go:364] duration metric: took 83.227µs to acquireMachinesLock for "auto-810782"
	I1014 21:20:34.436759  758490 start.go:93] Provisioning new machine with config: &{Name:auto-810782 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-810782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:20:34.436831  758490 start.go:125] createHost starting for "" (driver="docker")
	I1014 21:20:32.087720  757567 out.go:252] * Restarting existing docker container for "no-preload-970287" ...
	I1014 21:20:32.087808  757567 cli_runner.go:164] Run: docker start no-preload-970287
	I1014 21:20:32.397796  757567 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:20:32.426176  757567 kic.go:430] container "no-preload-970287" state is running.
	I1014 21:20:32.426537  757567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970287
	I1014 21:20:32.450603  757567 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/config.json ...
	I1014 21:20:32.450838  757567 machine.go:93] provisionDockerMachine start ...
	I1014 21:20:32.450906  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:32.473940  757567 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:32.474265  757567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1014 21:20:32.474284  757567 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:20:32.474951  757567 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1014 21:20:35.628058  757567 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-970287
	
	I1014 21:20:35.628100  757567 ubuntu.go:182] provisioning hostname "no-preload-970287"
	I1014 21:20:35.628214  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:35.647554  757567 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:35.647884  757567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1014 21:20:35.647902  757567 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-970287 && echo "no-preload-970287" | sudo tee /etc/hostname
	I1014 21:20:35.810491  757567 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-970287
	
	I1014 21:20:35.810572  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:35.832669  757567 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:35.833003  757567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1014 21:20:35.833028  757567 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-970287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-970287/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-970287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:20:35.984869  757567 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 21:20:35.984944  757567 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:20:35.985007  757567 ubuntu.go:190] setting up certificates
	I1014 21:20:35.985034  757567 provision.go:84] configureAuth start
	I1014 21:20:35.985127  757567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970287
	I1014 21:20:36.008725  757567 provision.go:143] copyHostCerts
	I1014 21:20:36.008797  757567 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:20:36.008807  757567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:20:36.008892  757567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:20:36.008995  757567 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:20:36.009007  757567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:20:36.009037  757567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:20:36.009089  757567 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:20:36.009093  757567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:20:36.009117  757567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:20:36.009163  757567 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.no-preload-970287 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-970287]
	I1014 21:20:36.356328  757567 provision.go:177] copyRemoteCerts
	I1014 21:20:36.356437  757567 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:20:36.356506  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:36.374623  757567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:20:36.476891  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:20:36.496564  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 21:20:36.521182  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 21:20:36.546095  757567 provision.go:87] duration metric: took 561.023061ms to configureAuth
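configureAuth above regenerates the machine's server certificate (SANs for 127.0.0.1, 192.168.76.2, localhost and the hostname) and copies the CA and server key pair into /etc/docker on the node. Their presence can be spot-checked from the host; a sketch using the same profile:

    out/minikube-linux-arm64 -p no-preload-970287 ssh -- sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem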
	I1014 21:20:36.546121  757567 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:20:36.546307  757567 config.go:182] Loaded profile config "no-preload-970287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:36.546415  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:36.568664  757567 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:36.568980  757567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1014 21:20:36.568999  757567 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:20:34.440107  758490 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 21:20:34.440365  758490 start.go:159] libmachine.API.Create for "auto-810782" (driver="docker")
	I1014 21:20:34.440416  758490 client.go:168] LocalClient.Create starting
	I1014 21:20:34.440491  758490 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem
	I1014 21:20:34.440527  758490 main.go:141] libmachine: Decoding PEM data...
	I1014 21:20:34.440544  758490 main.go:141] libmachine: Parsing certificate...
	I1014 21:20:34.440607  758490 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem
	I1014 21:20:34.440631  758490 main.go:141] libmachine: Decoding PEM data...
	I1014 21:20:34.440645  758490 main.go:141] libmachine: Parsing certificate...
	I1014 21:20:34.441001  758490 cli_runner.go:164] Run: docker network inspect auto-810782 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 21:20:34.456556  758490 cli_runner.go:211] docker network inspect auto-810782 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 21:20:34.456643  758490 network_create.go:284] running [docker network inspect auto-810782] to gather additional debugging logs...
	I1014 21:20:34.456664  758490 cli_runner.go:164] Run: docker network inspect auto-810782
	W1014 21:20:34.472640  758490 cli_runner.go:211] docker network inspect auto-810782 returned with exit code 1
	I1014 21:20:34.472670  758490 network_create.go:287] error running [docker network inspect auto-810782]: docker network inspect auto-810782: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-810782 not found
	I1014 21:20:34.472684  758490 network_create.go:289] output of [docker network inspect auto-810782]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-810782 not found
	
	** /stderr **
	I1014 21:20:34.472785  758490 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:20:34.490489  758490 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3483fd8ccce IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:16:f0:2b:be:5b} reservation:<nil>}
	I1014 21:20:34.490817  758490 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-92eecc9d5dcd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:46:39:04:de:cc:9d} reservation:<nil>}
	I1014 21:20:34.491181  758490 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6db3c426d67c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:2a:f4:c0:70:fe} reservation:<nil>}
	I1014 21:20:34.491446  758490 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-006eb97cac5b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fa:d4:13:6b:f6:4c} reservation:<nil>}
	I1014 21:20:34.491878  758490 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a547e0}
	I1014 21:20:34.491902  758490 network_create.go:124] attempt to create docker network auto-810782 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1014 21:20:34.491956  758490 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-810782 auto-810782
	I1014 21:20:34.542459  758490 network_create.go:108] docker network auto-810782 192.168.85.0/24 created
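With 192.168.49.0/24 through 192.168.76.0/24 already claimed by earlier profiles, minikube picks the next free /24 (192.168.85.0/24) and creates a labelled bridge network for the new cluster. The result can be inspected directly; a sketch using the labels from the create command above:

    docker network ls --filter label=name.minikube.sigs.k8s.io=auto-810782
    docker network inspect auto-810782 --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'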
	I1014 21:20:34.542497  758490 kic.go:121] calculated static IP "192.168.85.2" for the "auto-810782" container
	I1014 21:20:34.542567  758490 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 21:20:34.558539  758490 cli_runner.go:164] Run: docker volume create auto-810782 --label name.minikube.sigs.k8s.io=auto-810782 --label created_by.minikube.sigs.k8s.io=true
	I1014 21:20:34.576270  758490 oci.go:103] Successfully created a docker volume auto-810782
	I1014 21:20:34.576387  758490 cli_runner.go:164] Run: docker run --rm --name auto-810782-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-810782 --entrypoint /usr/bin/test -v auto-810782:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 21:20:35.081442  758490 oci.go:107] Successfully prepared a docker volume auto-810782
	I1014 21:20:35.081513  758490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:20:35.081536  758490 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 21:20:35.081622  758490 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-810782:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 21:20:36.941124  757567 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:20:36.941197  757567 machine.go:96] duration metric: took 4.490331779s to provisionDockerMachine
	I1014 21:20:36.941223  757567 start.go:293] postStartSetup for "no-preload-970287" (driver="docker")
	I1014 21:20:36.941265  757567 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:20:36.941364  757567 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:20:36.941439  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:36.969205  757567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:20:37.077483  757567 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:20:37.081669  757567 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:20:37.081700  757567 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:20:37.081712  757567 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:20:37.081784  757567 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:20:37.081877  757567 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:20:37.081977  757567 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:20:37.091262  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:20:37.112538  757567 start.go:296] duration metric: took 171.284368ms for postStartSetup
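postStartSetup also syncs everything under the local .minikube/files tree into the node at the same path, which is how the extra cert bundle 5469102.pem lands in /etc/ssl/certs above. A sketch of the same mechanism (my-extra-ca.pem is only an illustrative filename):

    mkdir -p /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs
    cp my-extra-ca.pem /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/
    out/minikube-linux-arm64 -p no-preload-970287 ssh -- ls /etc/ssl/certs/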
	I1014 21:20:37.112662  757567 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:20:37.112750  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:37.130825  757567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:20:37.233268  757567 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:20:37.237861  757567 fix.go:56] duration metric: took 5.175904148s for fixHost
	I1014 21:20:37.237903  757567 start.go:83] releasing machines lock for "no-preload-970287", held for 5.175955309s
	I1014 21:20:37.237986  757567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970287
	I1014 21:20:37.255571  757567 ssh_runner.go:195] Run: cat /version.json
	I1014 21:20:37.255636  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:37.255956  757567 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:20:37.256038  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:37.281422  757567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:20:37.292534  757567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:20:37.392282  757567 ssh_runner.go:195] Run: systemctl --version
	I1014 21:20:37.496794  757567 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:20:37.544708  757567 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:20:37.549363  757567 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:20:37.549435  757567 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:20:37.557247  757567 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 21:20:37.557272  757567 start.go:495] detecting cgroup driver to use...
	I1014 21:20:37.557305  757567 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:20:37.557353  757567 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:20:37.572789  757567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:20:37.586834  757567 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:20:37.586896  757567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:20:37.603186  757567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:20:37.617411  757567 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:20:37.754960  757567 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:20:37.903262  757567 docker.go:234] disabling docker service ...
	I1014 21:20:37.903324  757567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:20:37.921061  757567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:20:37.934760  757567 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:20:38.114853  757567 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:20:38.262997  757567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:20:38.280254  757567 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:20:38.296047  757567 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:20:38.296110  757567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:38.305436  757567 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:20:38.305503  757567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:38.314596  757567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:38.324937  757567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:38.334028  757567 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:20:38.342408  757567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:38.352696  757567 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:38.361192  757567 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:38.370085  757567 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:20:38.378393  757567 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:20:38.386996  757567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:38.554906  757567 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 21:20:40.467729  757567 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.912792426s)
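The sequence of sed edits above points CRI-O at registry.k8s.io/pause:3.10.1, switches it to the cgroupfs cgroup manager with conmon in the pod cgroup, and adds net.ipv4.ip_unprivileged_port_start=0 to default_sysctls before restarting the daemon. The effective settings can be double-checked on the node after the restart, for example:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    sudo crictl version    # crictl reads the /etc/crictl.yaml written earlier and talks to /var/run/crio/crio.sock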
	I1014 21:20:40.467759  757567 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:20:40.467809  757567 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:20:40.475298  757567 start.go:563] Will wait 60s for crictl version
	I1014 21:20:40.475382  757567 ssh_runner.go:195] Run: which crictl
	I1014 21:20:40.479444  757567 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:20:40.527163  757567 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:20:40.527256  757567 ssh_runner.go:195] Run: crio --version
	I1014 21:20:40.564160  757567 ssh_runner.go:195] Run: crio --version
	I1014 21:20:40.621239  757567 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 21:20:40.624274  757567 cli_runner.go:164] Run: docker network inspect no-preload-970287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:20:40.650222  757567 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1014 21:20:40.654298  757567 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:20:40.665597  757567 kubeadm.go:883] updating cluster {Name:no-preload-970287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-970287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:20:40.665711  757567 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:20:40.665753  757567 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:20:40.712243  757567 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:20:40.712280  757567 cache_images.go:85] Images are preloaded, skipping loading
	I1014 21:20:40.712288  757567 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1014 21:20:40.712399  757567 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-970287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-970287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 21:20:40.712497  757567 ssh_runner.go:195] Run: crio config
	I1014 21:20:40.800099  757567 cni.go:84] Creating CNI manager for ""
	I1014 21:20:40.800122  757567 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:20:40.800144  757567 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 21:20:40.800167  757567 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-970287 NodeName:no-preload-970287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:20:40.800339  757567 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-970287"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 21:20:40.800406  757567 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:20:40.817484  757567 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:20:40.817557  757567 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:20:40.827162  757567 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1014 21:20:40.848965  757567 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:20:40.880534  757567 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
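The generated kubeadm configuration shown above has just been written to /var/tmp/minikube/kubeadm.yaml.new on the node (2214 bytes). If the control-plane bring-up needs debugging, the rendered file can be read back and fed to kubeadm's own validator; a sketch, assuming kubeadm config validate is available in this kubeadm release:

    out/minikube-linux-arm64 -p no-preload-970287 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    out/minikube-linux-arm64 -p no-preload-970287 ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new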
	I1014 21:20:40.905344  757567 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:20:40.911779  757567 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:20:40.940252  757567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:41.213519  757567 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:20:41.246871  757567 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287 for IP: 192.168.76.2
	I1014 21:20:41.246893  757567 certs.go:195] generating shared ca certs ...
	I1014 21:20:41.246908  757567 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:41.247038  757567 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:20:41.247092  757567 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:20:41.247103  757567 certs.go:257] generating profile certs ...
	I1014 21:20:41.247191  757567 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.key
	I1014 21:20:41.247282  757567 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/apiserver.key.5c06cc4f
	I1014 21:20:41.247329  757567 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/proxy-client.key
	I1014 21:20:41.247450  757567 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:20:41.247484  757567 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:20:41.247497  757567 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:20:41.247523  757567 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:20:41.247568  757567 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:20:41.247596  757567 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:20:41.247643  757567 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:20:41.248238  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:20:41.290128  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:20:41.323294  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:20:41.419092  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:20:41.509972  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 21:20:41.564114  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 21:20:41.583289  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:20:41.618451  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 21:20:41.665657  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:20:41.734908  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:20:41.797747  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:20:41.822117  757567 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:20:41.836291  757567 ssh_runner.go:195] Run: openssl version
	I1014 21:20:41.846515  757567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:20:41.857373  757567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:41.864781  757567 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:41.864863  757567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:41.940535  757567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 21:20:41.949218  757567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:20:41.959049  757567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:20:41.964210  757567 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:20:41.964374  757567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:20:42.017940  757567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:20:42.030007  757567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:20:42.041700  757567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:20:42.047190  757567 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:20:42.047273  757567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:20:42.096725  757567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
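Each "openssl x509 -hash -noout" call above prints the subject-name hash OpenSSL uses for CA lookup, and the ln -fs that follows creates the matching <hash>.0 symlink in /etc/ssl/certs (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A sketch of the same pairing done by hand for one of the certificates:
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # HASH is b5213941 in this log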
	I1014 21:20:42.107078  757567 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:20:42.114380  757567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 21:20:42.163389  757567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 21:20:42.240837  757567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 21:20:42.309169  757567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 21:20:42.382780  757567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 21:20:42.476041  757567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
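The -checkend 86400 probes above ask whether each certificate remains valid for at least 86400 seconds (24 hours); a non-zero exit would prompt regeneration rather than reuse. A sketch of the same check, runnable manually against one of the files from the log:
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	    echo "valid for at least another 24h"
	else
	    echo "expires within 24h"
	fi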
	I1014 21:20:42.582660  757567 kubeadm.go:400] StartCluster: {Name:no-preload-970287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-970287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:20:42.582802  757567 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:20:42.582914  757567 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:20:42.640217  757567 cri.go:89] found id: "cb346dcfd25d91290bca1f5ab694ec0a28b9aaa55c5a269ab50381f0fbd4e400"
	I1014 21:20:42.640291  757567 cri.go:89] found id: "882dabee1ff3bf2239143b9f9994fbb5ca25e44aff582e09104e12db7b8feef7"
	I1014 21:20:42.640332  757567 cri.go:89] found id: "27cb84754cd9227030c4fe42c91f3891d3e08309d600c013e0ba42914cd29efc"
	I1014 21:20:42.640361  757567 cri.go:89] found id: "ecf2261fe60659c49b12b831c921d1d8a2f5ae0415991523e41524de6f31cb9a"
	I1014 21:20:42.640378  757567 cri.go:89] found id: ""
	I1014 21:20:42.640461  757567 ssh_runner.go:195] Run: sudo runc list -f json
	W1014 21:20:42.662774  757567 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:20:42Z" level=error msg="open /run/runc: no such file or directory"
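The runc failure above is non-fatal: /run/runc does not exist on this node, so the paused-container lookup is logged as a warning (kubeadm.go:407) and the restart path continues. Both probes can be reproduced by hand on the node:
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc list -f json   # fails with "open /run/runc: no such file or directory" when that state dir is absent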
	I1014 21:20:42.662933  757567 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:20:42.678175  757567 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 21:20:42.678208  757567 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 21:20:42.678262  757567 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 21:20:42.690181  757567 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 21:20:42.690638  757567 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-970287" does not appear in /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:20:42.690791  757567 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-545047/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-970287" cluster setting kubeconfig missing "no-preload-970287" context setting]
	I1014 21:20:42.691117  757567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:42.692756  757567 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 21:20:42.708506  757567 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1014 21:20:42.708536  757567 kubeadm.go:601] duration metric: took 30.321757ms to restartPrimaryControlPlane
	I1014 21:20:42.708544  757567 kubeadm.go:402] duration metric: took 125.895629ms to StartCluster
	I1014 21:20:42.708559  757567 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:42.708618  757567 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:20:42.709207  757567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:42.709389  757567 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:20:42.709656  757567 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 21:20:42.709726  757567 addons.go:69] Setting storage-provisioner=true in profile "no-preload-970287"
	I1014 21:20:42.709740  757567 addons.go:238] Setting addon storage-provisioner=true in "no-preload-970287"
	W1014 21:20:42.709745  757567 addons.go:247] addon storage-provisioner should already be in state true
	I1014 21:20:42.709769  757567 host.go:66] Checking if "no-preload-970287" exists ...
	I1014 21:20:42.710207  757567 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:20:42.711359  757567 addons.go:69] Setting dashboard=true in profile "no-preload-970287"
	I1014 21:20:42.711411  757567 addons.go:238] Setting addon dashboard=true in "no-preload-970287"
	W1014 21:20:42.711432  757567 addons.go:247] addon dashboard should already be in state true
	I1014 21:20:42.711496  757567 host.go:66] Checking if "no-preload-970287" exists ...
	I1014 21:20:42.711952  757567 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:20:42.717315  757567 config.go:182] Loaded profile config "no-preload-970287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:42.717429  757567 out.go:179] * Verifying Kubernetes components...
	I1014 21:20:42.722229  757567 addons.go:69] Setting default-storageclass=true in profile "no-preload-970287"
	I1014 21:20:42.722263  757567 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-970287"
	I1014 21:20:42.722614  757567 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:20:42.722858  757567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:42.763222  757567 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:20:42.766152  757567 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:20:42.766184  757567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 21:20:42.766248  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:42.787847  757567 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1014 21:20:42.791450  757567 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1014 21:20:40.329199  758490 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-810782:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (5.247540627s)
	I1014 21:20:40.329230  758490 kic.go:203] duration metric: took 5.247690136s to extract preloaded images to volume ...
	W1014 21:20:40.329367  758490 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1014 21:20:40.329478  758490 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 21:20:40.417230  758490 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-810782 --name auto-810782 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-810782 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-810782 --network auto-810782 --ip 192.168.85.2 --volume auto-810782:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 21:20:40.790346  758490 cli_runner.go:164] Run: docker container inspect auto-810782 --format={{.State.Running}}
	I1014 21:20:40.815372  758490 cli_runner.go:164] Run: docker container inspect auto-810782 --format={{.State.Status}}
	I1014 21:20:40.843061  758490 cli_runner.go:164] Run: docker exec auto-810782 stat /var/lib/dpkg/alternatives/iptables
	I1014 21:20:40.911957  758490 oci.go:144] the created container "auto-810782" has a running status.
	I1014 21:20:40.911977  758490 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa...
	I1014 21:20:41.325483  758490 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 21:20:41.350301  758490 cli_runner.go:164] Run: docker container inspect auto-810782 --format={{.State.Status}}
	I1014 21:20:41.368982  758490 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 21:20:41.369008  758490 kic_runner.go:114] Args: [docker exec --privileged auto-810782 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 21:20:41.447713  758490 cli_runner.go:164] Run: docker container inspect auto-810782 --format={{.State.Status}}
	I1014 21:20:41.477565  758490 machine.go:93] provisionDockerMachine start ...
	I1014 21:20:41.477659  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:20:41.506039  758490 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:41.506391  758490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1014 21:20:41.506408  758490 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:20:41.507084  758490 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60380->127.0.0.1:33832: read: connection reset by peer
	I1014 21:20:42.796936  757567 addons.go:238] Setting addon default-storageclass=true in "no-preload-970287"
	W1014 21:20:42.796957  757567 addons.go:247] addon default-storageclass should already be in state true
	I1014 21:20:42.796982  757567 host.go:66] Checking if "no-preload-970287" exists ...
	I1014 21:20:42.797451  757567 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:20:42.797786  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1014 21:20:42.797804  757567 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1014 21:20:42.797849  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:42.802100  757567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:20:42.848433  757567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:20:42.848461  757567 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 21:20:42.848482  757567 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 21:20:42.848545  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:42.876376  757567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:20:43.039464  757567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:20:43.078191  757567 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:20:43.085791  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1014 21:20:43.085859  757567 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1014 21:20:43.103257  757567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 21:20:43.159340  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1014 21:20:43.159366  757567 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1014 21:20:43.237884  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1014 21:20:43.237912  757567 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1014 21:20:43.334200  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1014 21:20:43.334227  757567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1014 21:20:43.352094  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1014 21:20:43.352121  757567 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1014 21:20:43.373705  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1014 21:20:43.373737  757567 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1014 21:20:43.392529  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1014 21:20:43.392571  757567 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1014 21:20:43.413513  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1014 21:20:43.413539  757567 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1014 21:20:43.430765  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 21:20:43.430794  757567 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1014 21:20:43.453827  757567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
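All ten dashboard manifests are applied in a single kubectl invocation against the node-local kubeconfig. A sketch, not from the log, of inspecting the result afterwards (the dashboard addon deploys into the kubernetes-dashboard namespace):
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard get deploy,svc,pods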
	I1014 21:20:44.700184  758490 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-810782
	
	I1014 21:20:44.700205  758490 ubuntu.go:182] provisioning hostname "auto-810782"
	I1014 21:20:44.700284  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:20:44.750482  758490 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:44.750788  758490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1014 21:20:44.750799  758490 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-810782 && echo "auto-810782" | sudo tee /etc/hostname
	I1014 21:20:44.945884  758490 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-810782
	
	I1014 21:20:44.945969  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:20:44.988681  758490 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:44.988999  758490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1014 21:20:44.989015  758490 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-810782' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-810782/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-810782' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:20:45.242201  758490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 21:20:45.242309  758490 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:20:45.242344  758490 ubuntu.go:190] setting up certificates
	I1014 21:20:45.242384  758490 provision.go:84] configureAuth start
	I1014 21:20:45.242665  758490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-810782
	I1014 21:20:45.286182  758490 provision.go:143] copyHostCerts
	I1014 21:20:45.286259  758490 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:20:45.286270  758490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:20:45.286373  758490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:20:45.286535  758490 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:20:45.286546  758490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:20:45.286581  758490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:20:45.286655  758490 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:20:45.286661  758490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:20:45.286686  758490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:20:45.286812  758490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.auto-810782 san=[127.0.0.1 192.168.85.2 auto-810782 localhost minikube]
	I1014 21:20:45.477961  758490 provision.go:177] copyRemoteCerts
	I1014 21:20:45.478060  758490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:20:45.478125  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:20:45.497213  758490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa Username:docker}
	I1014 21:20:45.617262  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:20:45.658796  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1014 21:20:45.686167  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 21:20:45.720595  758490 provision.go:87] duration metric: took 478.160894ms to configureAuth
	I1014 21:20:45.720622  758490 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:20:45.720850  758490 config.go:182] Loaded profile config "auto-810782": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:45.721020  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:20:45.762407  758490 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:45.762744  758490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1014 21:20:45.762766  758490 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:20:46.137032  758490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:20:46.137069  758490 machine.go:96] duration metric: took 4.659478967s to provisionDockerMachine
	I1014 21:20:46.137078  758490 client.go:171] duration metric: took 11.696651136s to LocalClient.Create
	I1014 21:20:46.137092  758490 start.go:167] duration metric: took 11.696727741s to libmachine.API.Create "auto-810782"
	I1014 21:20:46.137126  758490 start.go:293] postStartSetup for "auto-810782" (driver="docker")
	I1014 21:20:46.137144  758490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:20:46.137247  758490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:20:46.137327  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:20:46.185646  758490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa Username:docker}
	I1014 21:20:46.301126  758490 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:20:46.306832  758490 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:20:46.306867  758490 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:20:46.306879  758490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:20:46.306934  758490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:20:46.307017  758490 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:20:46.307134  758490 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:20:46.320569  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:20:46.349015  758490 start.go:296] duration metric: took 211.85106ms for postStartSetup
	I1014 21:20:46.349383  758490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-810782
	I1014 21:20:46.377904  758490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/config.json ...
	I1014 21:20:46.378192  758490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:20:46.378244  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:20:46.409945  758490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa Username:docker}
	I1014 21:20:46.521574  758490 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:20:46.528941  758490 start.go:128] duration metric: took 12.09209582s to createHost
	I1014 21:20:46.529008  758490 start.go:83] releasing machines lock for "auto-810782", held for 12.092265547s
	I1014 21:20:46.529124  758490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-810782
	I1014 21:20:46.559968  758490 ssh_runner.go:195] Run: cat /version.json
	I1014 21:20:46.560024  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:20:46.560261  758490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:20:46.560351  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:20:46.594501  758490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa Username:docker}
	I1014 21:20:46.610168  758490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa Username:docker}
	I1014 21:20:46.713426  758490 ssh_runner.go:195] Run: systemctl --version
	I1014 21:20:46.833374  758490 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:20:46.907285  758490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:20:46.912478  758490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:20:46.912550  758490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:20:46.969841  758490 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1014 21:20:46.969868  758490 start.go:495] detecting cgroup driver to use...
	I1014 21:20:46.969903  758490 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:20:46.969958  758490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:20:46.995925  758490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:20:47.015403  758490 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:20:47.015481  758490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:20:47.034817  758490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:20:47.054874  758490 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:20:47.277340  758490 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:20:47.454151  758490 docker.go:234] disabling docker service ...
	I1014 21:20:47.454270  758490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:20:47.486089  758490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:20:47.505165  758490 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:20:47.731741  758490 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:20:47.941374  758490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:20:47.956249  758490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:20:47.980987  758490 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:20:47.981103  758490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:47.997695  758490 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:20:47.997812  758490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:48.011147  758490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:48.027140  758490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:48.042206  758490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:20:48.058658  758490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:48.075265  758490 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:48.097781  758490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:48.107989  758490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:20:48.123519  758490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:20:48.133626  758490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:48.320790  758490 ssh_runner.go:195] Run: sudo systemctl restart crio
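The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. A sketch of verifying the result, with the expected values taken from the commands in this log:
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",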
	I1014 21:20:48.519901  758490 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:20:48.520011  758490 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:20:48.528071  758490 start.go:563] Will wait 60s for crictl version
	I1014 21:20:48.528184  758490 ssh_runner.go:195] Run: which crictl
	I1014 21:20:48.535424  758490 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:20:48.594222  758490 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:20:48.594329  758490 ssh_runner.go:195] Run: crio --version
	I1014 21:20:48.663307  758490 ssh_runner.go:195] Run: crio --version
	I1014 21:20:48.712343  758490 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 21:20:48.715288  758490 cli_runner.go:164] Run: docker network inspect auto-810782 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:20:48.738546  758490 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1014 21:20:48.742594  758490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:20:48.763017  758490 kubeadm.go:883] updating cluster {Name:auto-810782 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-810782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:20:48.763136  758490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:20:48.763188  758490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:20:48.822344  758490 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:20:48.822372  758490 crio.go:433] Images already preloaded, skipping extraction
	I1014 21:20:48.822428  758490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:20:48.865307  758490 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:20:48.865335  758490 cache_images.go:85] Images are preloaded, skipping loading
	I1014 21:20:48.865343  758490 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1014 21:20:48.865433  758490 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-810782 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-810782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
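The ExecStart override above is written as a systemd drop-in; the scp lines that follow place it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside /lib/systemd/system/kubelet.service. A sketch of viewing the merged unit on the node:
	systemctl cat kubelet   # prints kubelet.service followed by the 10-kubeadm.conf drop-in shown above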
	I1014 21:20:48.865526  758490 ssh_runner.go:195] Run: crio config
	I1014 21:20:48.957944  758490 cni.go:84] Creating CNI manager for ""
	I1014 21:20:48.957971  758490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:20:48.957991  758490 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 21:20:48.958014  758490 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-810782 NodeName:auto-810782 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:20:48.958145  758490 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-810782"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 21:20:48.958223  758490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:20:48.973608  758490 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:20:48.973695  758490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:20:48.983922  758490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1014 21:20:49.004628  758490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:20:49.032285  758490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1014 21:20:49.050506  758490 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:20:49.056870  758490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:20:49.072747  758490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:51.819048  757567 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.77955122s)
	I1014 21:20:51.819103  757567 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.74089327s)
	I1014 21:20:51.819133  757567 node_ready.go:35] waiting up to 6m0s for node "no-preload-970287" to be "Ready" ...
	I1014 21:20:51.819452  757567 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.716125907s)
	I1014 21:20:51.871888  757567 node_ready.go:49] node "no-preload-970287" is "Ready"
	I1014 21:20:51.871911  757567 node_ready.go:38] duration metric: took 52.759451ms for node "no-preload-970287" to be "Ready" ...
	I1014 21:20:51.871924  757567 api_server.go:52] waiting for apiserver process to appear ...
	I1014 21:20:51.871976  757567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 21:20:52.003540  757567 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.549666044s)
	I1014 21:20:52.003894  757567 api_server.go:72] duration metric: took 9.29447948s to wait for apiserver process to appear ...
	I1014 21:20:52.003945  757567 api_server.go:88] waiting for apiserver healthz status ...
	I1014 21:20:52.003983  757567 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1014 21:20:52.007341  757567 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-970287 addons enable metrics-server
	
	I1014 21:20:52.010348  757567 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
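The healthz wait above polls the apiserver endpoint directly until it answers. A sketch, not from the log, of the same probe issued by hand (-k because the endpoint presents the cluster-local minikubeCA certificate):
	curl -sk https://192.168.76.2:8443/healthz   # prints "ok" once the control plane is healthy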
	I1014 21:20:49.254523  758490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:20:49.272147  758490 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782 for IP: 192.168.85.2
	I1014 21:20:49.272173  758490 certs.go:195] generating shared ca certs ...
	I1014 21:20:49.272189  758490 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:49.272357  758490 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:20:49.272402  758490 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:20:49.272415  758490 certs.go:257] generating profile certs ...
	I1014 21:20:49.272489  758490 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.key
	I1014 21:20:49.272505  758490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.crt with IP's: []
	I1014 21:20:49.472584  758490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.crt ...
	I1014 21:20:49.472616  758490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.crt: {Name:mkbc239d2a8ad48649f591f416c75ebc1b4dcbb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:49.472801  758490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.key ...
	I1014 21:20:49.472816  758490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.key: {Name:mkbe66273e5658d249652fdf3003b5dbf104a156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:49.472903  758490 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.key.c4fc698d
	I1014 21:20:49.472924  758490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.crt.c4fc698d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1014 21:20:50.114544  758490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.crt.c4fc698d ...
	I1014 21:20:50.114580  758490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.crt.c4fc698d: {Name:mka7917892c64e37d3102e6d528343aab6538fc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:50.114783  758490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.key.c4fc698d ...
	I1014 21:20:50.114802  758490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.key.c4fc698d: {Name:mk2e3400af979d440f9d862b57cc7e5c38cbc0f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:50.114894  758490 certs.go:382] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.crt.c4fc698d -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.crt
	I1014 21:20:50.114981  758490 certs.go:386] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.key.c4fc698d -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.key
	I1014 21:20:50.115045  758490 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/proxy-client.key
	I1014 21:20:50.115063  758490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/proxy-client.crt with IP's: []
	I1014 21:20:51.209286  758490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/proxy-client.crt ...
	I1014 21:20:51.209320  758490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/proxy-client.crt: {Name:mk8b4f82003bf5e61833421ff91e9236fac30774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:51.209527  758490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/proxy-client.key ...
	I1014 21:20:51.209540  758490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/proxy-client.key: {Name:mke0e0ab5d8a9d7aed25e411cccbee7a0f0211e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:51.209747  758490 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:20:51.209801  758490 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:20:51.209814  758490 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:20:51.209843  758490 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:20:51.209886  758490 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:20:51.209911  758490 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:20:51.209960  758490 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:20:51.210604  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:20:51.228940  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:20:51.246625  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:20:51.273420  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:20:51.301913  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1014 21:20:51.337446  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 21:20:51.368822  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:20:51.401591  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 21:20:51.438202  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:20:51.461489  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:20:51.493385  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:20:51.520860  758490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:20:51.542019  758490 ssh_runner.go:195] Run: openssl version
	I1014 21:20:51.552286  758490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:20:51.565070  758490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:20:51.574770  758490 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:20:51.574854  758490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:20:51.619166  758490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:20:51.628035  758490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:20:51.636623  758490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:20:51.640987  758490 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:20:51.641064  758490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:20:51.683674  758490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 21:20:51.697280  758490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:20:51.706180  758490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:51.713027  758490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:51.713112  758490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:51.772120  758490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 21:20:51.785476  758490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:20:51.793662  758490 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 21:20:51.793757  758490 kubeadm.go:400] StartCluster: {Name:auto-810782 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-810782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:20:51.793861  758490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:20:51.793968  758490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:20:51.834197  758490 cri.go:89] found id: ""
	I1014 21:20:51.834284  758490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:20:51.847308  758490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 21:20:51.855609  758490 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 21:20:51.855681  758490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 21:20:51.870905  758490 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 21:20:51.870927  758490 kubeadm.go:157] found existing configuration files:
	
	I1014 21:20:51.870991  758490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 21:20:51.888450  758490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 21:20:51.888566  758490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 21:20:51.897983  758490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 21:20:51.907896  758490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 21:20:51.907972  758490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 21:20:51.921484  758490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 21:20:51.934128  758490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 21:20:51.934198  758490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 21:20:51.942755  758490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 21:20:51.957002  758490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 21:20:51.957078  758490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 21:20:51.973628  758490 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 21:20:52.059646  758490 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 21:20:52.060017  758490 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 21:20:52.129358  758490 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 21:20:52.131185  758490 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1014 21:20:52.131249  758490 kubeadm.go:318] OS: Linux
	I1014 21:20:52.131309  758490 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 21:20:52.131379  758490 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1014 21:20:52.131448  758490 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 21:20:52.131518  758490 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 21:20:52.131583  758490 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 21:20:52.131647  758490 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 21:20:52.131710  758490 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 21:20:52.131773  758490 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 21:20:52.131836  758490 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1014 21:20:52.260885  758490 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 21:20:52.261011  758490 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 21:20:52.261134  758490 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 21:20:52.276425  758490 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 21:20:52.281674  758490 out.go:252]   - Generating certificates and keys ...
	I1014 21:20:52.281804  758490 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 21:20:52.281886  758490 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 21:20:52.596382  758490 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 21:20:53.209049  758490 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 21:20:53.443833  758490 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 21:20:53.848719  758490 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 21:20:54.147119  758490 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 21:20:54.147479  758490 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-810782 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1014 21:20:52.013252  757567 addons.go:514] duration metric: took 9.303579028s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1014 21:20:52.029120  757567 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1014 21:20:52.033462  757567 api_server.go:141] control plane version: v1.34.1
	I1014 21:20:52.033489  757567 api_server.go:131] duration metric: took 29.525217ms to wait for apiserver health ...
	I1014 21:20:52.033498  757567 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 21:20:52.040157  757567 system_pods.go:59] 8 kube-system pods found
	I1014 21:20:52.040246  757567 system_pods.go:61] "coredns-66bc5c9577-m627n" [70a4ed77-8432-4ad1-8ea5-28e6bb93b469] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 21:20:52.040272  757567 system_pods.go:61] "etcd-no-preload-970287" [47522eab-d9ff-406f-abbf-ea1f58b462e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 21:20:52.040364  757567 system_pods.go:61] "kindnet-tps4w" [bbcd8c52-d245-492d-b083-0b93dc854ffd] Running
	I1014 21:20:52.040404  757567 system_pods.go:61] "kube-apiserver-no-preload-970287" [34789a40-0f03-4732-bbab-ec4774cee5ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 21:20:52.040426  757567 system_pods.go:61] "kube-controller-manager-no-preload-970287" [84201c4c-7ac7-4cb5-a2d3-cdb4d100ddf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 21:20:52.040454  757567 system_pods.go:61] "kube-proxy-c4vlq" [bd169342-2896-4b4e-b062-de3894754634] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 21:20:52.040490  757567 system_pods.go:61] "kube-scheduler-no-preload-970287" [2c141840-1640-41f3-8d2b-4fb8c4b15f07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 21:20:52.040515  757567 system_pods.go:61] "storage-provisioner" [b9d1e209-23f4-4c75-b216-869ed982bec3] Running
	I1014 21:20:52.040535  757567 system_pods.go:74] duration metric: took 7.031051ms to wait for pod list to return data ...
	I1014 21:20:52.040572  757567 default_sa.go:34] waiting for default service account to be created ...
	I1014 21:20:52.052642  757567 default_sa.go:45] found service account: "default"
	I1014 21:20:52.052718  757567 default_sa.go:55] duration metric: took 12.123601ms for default service account to be created ...
	I1014 21:20:52.052742  757567 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 21:20:52.154584  757567 system_pods.go:86] 8 kube-system pods found
	I1014 21:20:52.154670  757567 system_pods.go:89] "coredns-66bc5c9577-m627n" [70a4ed77-8432-4ad1-8ea5-28e6bb93b469] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 21:20:52.154695  757567 system_pods.go:89] "etcd-no-preload-970287" [47522eab-d9ff-406f-abbf-ea1f58b462e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 21:20:52.154734  757567 system_pods.go:89] "kindnet-tps4w" [bbcd8c52-d245-492d-b083-0b93dc854ffd] Running
	I1014 21:20:52.154762  757567 system_pods.go:89] "kube-apiserver-no-preload-970287" [34789a40-0f03-4732-bbab-ec4774cee5ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 21:20:52.154787  757567 system_pods.go:89] "kube-controller-manager-no-preload-970287" [84201c4c-7ac7-4cb5-a2d3-cdb4d100ddf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 21:20:52.154822  757567 system_pods.go:89] "kube-proxy-c4vlq" [bd169342-2896-4b4e-b062-de3894754634] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 21:20:52.154849  757567 system_pods.go:89] "kube-scheduler-no-preload-970287" [2c141840-1640-41f3-8d2b-4fb8c4b15f07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 21:20:52.154870  757567 system_pods.go:89] "storage-provisioner" [b9d1e209-23f4-4c75-b216-869ed982bec3] Running
	I1014 21:20:52.154908  757567 system_pods.go:126] duration metric: took 102.143556ms to wait for k8s-apps to be running ...
	I1014 21:20:52.154935  757567 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 21:20:52.155027  757567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:20:52.191138  757567 system_svc.go:56] duration metric: took 36.193175ms WaitForService to wait for kubelet
	I1014 21:20:52.191220  757567 kubeadm.go:586] duration metric: took 9.481801277s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:20:52.191257  757567 node_conditions.go:102] verifying NodePressure condition ...
	I1014 21:20:52.217276  757567 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 21:20:52.217369  757567 node_conditions.go:123] node cpu capacity is 2
	I1014 21:20:52.217398  757567 node_conditions.go:105] duration metric: took 26.106071ms to run NodePressure ...
	I1014 21:20:52.217438  757567 start.go:241] waiting for startup goroutines ...
	I1014 21:20:52.217463  757567 start.go:246] waiting for cluster config update ...
	I1014 21:20:52.217490  757567 start.go:255] writing updated cluster config ...
	I1014 21:20:52.217852  757567 ssh_runner.go:195] Run: rm -f paused
	I1014 21:20:52.226355  757567 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 21:20:52.260978  757567 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m627n" in "kube-system" namespace to be "Ready" or be gone ...
	W1014 21:20:54.280496  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	I1014 21:20:55.185261  758490 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 21:20:55.185672  758490 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-810782 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1014 21:20:55.446306  758490 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 21:20:55.732688  758490 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 21:20:55.973481  758490 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 21:20:55.973800  758490 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 21:20:56.547457  758490 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 21:20:56.803057  758490 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 21:20:57.413692  758490 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 21:20:57.494152  758490 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 21:20:57.818094  758490 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 21:20:57.819195  758490 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 21:20:57.822164  758490 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 21:20:57.826439  758490 out.go:252]   - Booting up control plane ...
	I1014 21:20:57.826543  758490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 21:20:57.826631  758490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 21:20:57.827835  758490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 21:20:57.850799  758490 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 21:20:57.850918  758490 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 21:20:57.860526  758490 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 21:20:57.860637  758490 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 21:20:57.860684  758490 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 21:20:58.080298  758490 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 21:20:58.085550  758490 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 21:20:59.086936  758490 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001697105s
	I1014 21:20:59.097036  758490 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 21:20:59.100785  758490 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1014 21:20:59.100894  758490 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 21:20:59.100976  758490 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1014 21:20:56.767804  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:20:59.267478  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:01.275706  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	I1014 21:21:02.258447  758490 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.160022341s
	W1014 21:21:03.771026  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:05.793921  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	I1014 21:21:06.549884  758490 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.452179513s
	I1014 21:21:07.600788  758490 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.502881262s
	I1014 21:21:07.661586  758490 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 21:21:07.702526  758490 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 21:21:07.752185  758490 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 21:21:07.753040  758490 kubeadm.go:318] [mark-control-plane] Marking the node auto-810782 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 21:21:07.782774  758490 kubeadm.go:318] [bootstrap-token] Using token: owkwev.s128a74ctr4eiv72
	I1014 21:21:07.786453  758490 out.go:252]   - Configuring RBAC rules ...
	I1014 21:21:07.786576  758490 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 21:21:07.795523  758490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 21:21:07.806593  758490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 21:21:07.813316  758490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 21:21:07.827682  758490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 21:21:07.838258  758490 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 21:21:08.030107  758490 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 21:21:08.459835  758490 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1014 21:21:09.019343  758490 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1014 21:21:09.021563  758490 kubeadm.go:318] 
	I1014 21:21:09.021645  758490 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1014 21:21:09.021653  758490 kubeadm.go:318] 
	I1014 21:21:09.021734  758490 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1014 21:21:09.021742  758490 kubeadm.go:318] 
	I1014 21:21:09.021769  758490 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1014 21:21:09.021843  758490 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 21:21:09.021901  758490 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 21:21:09.021911  758490 kubeadm.go:318] 
	I1014 21:21:09.021974  758490 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1014 21:21:09.021982  758490 kubeadm.go:318] 
	I1014 21:21:09.022043  758490 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 21:21:09.022053  758490 kubeadm.go:318] 
	I1014 21:21:09.022112  758490 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1014 21:21:09.022200  758490 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 21:21:09.022277  758490 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 21:21:09.022286  758490 kubeadm.go:318] 
	I1014 21:21:09.022374  758490 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 21:21:09.022458  758490 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1014 21:21:09.022465  758490 kubeadm.go:318] 
	I1014 21:21:09.022552  758490 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token owkwev.s128a74ctr4eiv72 \
	I1014 21:21:09.022666  758490 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a \
	I1014 21:21:09.022691  758490 kubeadm.go:318] 	--control-plane 
	I1014 21:21:09.022700  758490 kubeadm.go:318] 
	I1014 21:21:09.022797  758490 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1014 21:21:09.022806  758490 kubeadm.go:318] 
	I1014 21:21:09.022891  758490 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token owkwev.s128a74ctr4eiv72 \
	I1014 21:21:09.023003  758490 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a 
	I1014 21:21:09.027249  758490 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1014 21:21:09.027493  758490 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1014 21:21:09.027610  758490 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 21:21:09.027633  758490 cni.go:84] Creating CNI manager for ""
	I1014 21:21:09.027641  758490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:21:09.030747  758490 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1014 21:21:09.033564  758490 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 21:21:09.040717  758490 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1014 21:21:09.040743  758490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 21:21:09.063489  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	W1014 21:21:08.272375  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:10.273533  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	I1014 21:21:09.791321  758490 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 21:21:09.791454  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:21:09.791530  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-810782 minikube.k8s.io/updated_at=2025_10_14T21_21_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662 minikube.k8s.io/name=auto-810782 minikube.k8s.io/primary=true
	I1014 21:21:10.020226  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:21:10.020295  758490 ops.go:34] apiserver oom_adj: -16
	I1014 21:21:10.520878  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:21:11.020466  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:21:11.520714  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:21:12.020834  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:21:12.520332  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:21:13.021127  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:21:13.521113  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:21:13.755243  758490 kubeadm.go:1113] duration metric: took 3.963834493s to wait for elevateKubeSystemPrivileges
	I1014 21:21:13.755270  758490 kubeadm.go:402] duration metric: took 21.961518228s to StartCluster
	I1014 21:21:13.755291  758490 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:21:13.755351  758490 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:21:13.756326  758490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:21:13.757147  758490 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:21:13.757280  758490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 21:21:13.757517  758490 config.go:182] Loaded profile config "auto-810782": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:21:13.757548  758490 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 21:21:13.757642  758490 addons.go:69] Setting storage-provisioner=true in profile "auto-810782"
	I1014 21:21:13.757656  758490 addons.go:238] Setting addon storage-provisioner=true in "auto-810782"
	I1014 21:21:13.757664  758490 addons.go:69] Setting default-storageclass=true in profile "auto-810782"
	I1014 21:21:13.757681  758490 host.go:66] Checking if "auto-810782" exists ...
	I1014 21:21:13.757681  758490 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-810782"
	I1014 21:21:13.758110  758490 cli_runner.go:164] Run: docker container inspect auto-810782 --format={{.State.Status}}
	I1014 21:21:13.758163  758490 cli_runner.go:164] Run: docker container inspect auto-810782 --format={{.State.Status}}
	I1014 21:21:13.768592  758490 out.go:179] * Verifying Kubernetes components...
	I1014 21:21:13.774711  758490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:21:13.817239  758490 addons.go:238] Setting addon default-storageclass=true in "auto-810782"
	I1014 21:21:13.817280  758490 host.go:66] Checking if "auto-810782" exists ...
	I1014 21:21:13.817701  758490 cli_runner.go:164] Run: docker container inspect auto-810782 --format={{.State.Status}}
	I1014 21:21:13.829669  758490 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:21:13.837882  758490 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:21:13.838061  758490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 21:21:13.838148  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:21:13.864932  758490 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 21:21:13.864952  758490 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 21:21:13.865017  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:21:13.903366  758490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa Username:docker}
	I1014 21:21:13.910342  758490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa Username:docker}
	I1014 21:21:14.197547  758490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:21:14.206316  758490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 21:21:14.255642  758490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 21:21:14.264006  758490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:21:14.980189  758490 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1014 21:21:14.982362  758490 node_ready.go:35] waiting up to 15m0s for node "auto-810782" to be "Ready" ...
	I1014 21:21:15.041364  758490 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1014 21:21:12.766642  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:14.766945  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	I1014 21:21:15.045128  758490 addons.go:514] duration metric: took 1.287547684s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 21:21:15.484357  758490 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-810782" context rescaled to 1 replicas
	W1014 21:21:16.985424  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:16.767139  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:19.266772  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:21.267814  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:19.485103  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:21.485707  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:23.986129  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:23.766734  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:26.266643  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:26.485725  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:28.486133  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:28.766884  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:30.767357  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	I1014 21:21:31.267811  757567 pod_ready.go:94] pod "coredns-66bc5c9577-m627n" is "Ready"
	I1014 21:21:31.267842  757567 pod_ready.go:86] duration metric: took 39.006827062s for pod "coredns-66bc5c9577-m627n" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:31.270713  757567 pod_ready.go:83] waiting for pod "etcd-no-preload-970287" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:31.275318  757567 pod_ready.go:94] pod "etcd-no-preload-970287" is "Ready"
	I1014 21:21:31.275346  757567 pod_ready.go:86] duration metric: took 4.604629ms for pod "etcd-no-preload-970287" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:31.277773  757567 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-970287" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:31.282393  757567 pod_ready.go:94] pod "kube-apiserver-no-preload-970287" is "Ready"
	I1014 21:21:31.282421  757567 pod_ready.go:86] duration metric: took 4.621319ms for pod "kube-apiserver-no-preload-970287" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:31.284942  757567 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-970287" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:31.465354  757567 pod_ready.go:94] pod "kube-controller-manager-no-preload-970287" is "Ready"
	I1014 21:21:31.465385  757567 pod_ready.go:86] duration metric: took 180.41666ms for pod "kube-controller-manager-no-preload-970287" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:31.666355  757567 pod_ready.go:83] waiting for pod "kube-proxy-c4vlq" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:32.065701  757567 pod_ready.go:94] pod "kube-proxy-c4vlq" is "Ready"
	I1014 21:21:32.065780  757567 pod_ready.go:86] duration metric: took 399.393414ms for pod "kube-proxy-c4vlq" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:32.264966  757567 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-970287" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:32.671562  757567 pod_ready.go:94] pod "kube-scheduler-no-preload-970287" is "Ready"
	I1014 21:21:32.671641  757567 pod_ready.go:86] duration metric: took 406.648235ms for pod "kube-scheduler-no-preload-970287" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:32.671669  757567 pod_ready.go:40] duration metric: took 40.445226196s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 21:21:32.740100  757567 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1014 21:21:32.743641  757567 out.go:179] * Done! kubectl is now configured to use "no-preload-970287" cluster and "default" namespace by default
	W1014 21:21:30.985705  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:32.985895  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:35.485395  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:37.485908  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:39.985668  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:42.485015  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 14 21:21:25 no-preload-970287 crio[653]: time="2025-10-14T21:21:25.782253436Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:21:25 no-preload-970287 crio[653]: time="2025-10-14T21:21:25.78942861Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:21:25 no-preload-970287 crio[653]: time="2025-10-14T21:21:25.79011847Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:21:25 no-preload-970287 crio[653]: time="2025-10-14T21:21:25.804633607Z" level=info msg="Created container 1c0bac6eb96e4833497570b1054b50fb2a2cdbc485bc3e4f08eb66f614245d09: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4/dashboard-metrics-scraper" id=4dffee9d-cf07-4a1d-be89-87e907191f9b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:21:25 no-preload-970287 crio[653]: time="2025-10-14T21:21:25.805300115Z" level=info msg="Starting container: 1c0bac6eb96e4833497570b1054b50fb2a2cdbc485bc3e4f08eb66f614245d09" id=c75934d4-3fba-471b-a036-410d5fe0430a name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:21:25 no-preload-970287 conmon[1640]: conmon 1c0bac6eb96e48334975 <ninfo>: container 1642 exited with status 1
	Oct 14 21:21:25 no-preload-970287 crio[653]: time="2025-10-14T21:21:25.810048443Z" level=info msg="Started container" PID=1642 containerID=1c0bac6eb96e4833497570b1054b50fb2a2cdbc485bc3e4f08eb66f614245d09 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4/dashboard-metrics-scraper id=c75934d4-3fba-471b-a036-410d5fe0430a name=/runtime.v1.RuntimeService/StartContainer sandboxID=edfc02a18831be8585747bf4198b47582a110453ecb5f56563461ccc9da56b68
	Oct 14 21:21:26 no-preload-970287 crio[653]: time="2025-10-14T21:21:26.306036229Z" level=info msg="Removing container: cae9e5ee03bef7f43cd086d714ccad801abb6ba4e1f5d3dba576280eb63afe39" id=970ddb05-432b-4de3-ba5c-25c035b35896 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 21:21:26 no-preload-970287 crio[653]: time="2025-10-14T21:21:26.313880554Z" level=info msg="Error loading conmon cgroup of container cae9e5ee03bef7f43cd086d714ccad801abb6ba4e1f5d3dba576280eb63afe39: cgroup deleted" id=970ddb05-432b-4de3-ba5c-25c035b35896 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 21:21:26 no-preload-970287 crio[653]: time="2025-10-14T21:21:26.31956303Z" level=info msg="Removed container cae9e5ee03bef7f43cd086d714ccad801abb6ba4e1f5d3dba576280eb63afe39: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4/dashboard-metrics-scraper" id=970ddb05-432b-4de3-ba5c-25c035b35896 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.05745126Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.066118633Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.066169226Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.066209638Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.069775862Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.069837468Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.069861337Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.073408657Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.073451135Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.073475948Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.077029693Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.077068036Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.077094129Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.080893991Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.080935502Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	1c0bac6eb96e4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   edfc02a18831b       dashboard-metrics-scraper-6ffb444bf9-5wgx4   kubernetes-dashboard
	4bcc6a658ca4f       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           25 seconds ago       Running             storage-provisioner         2                   9581dc6f994aa       storage-provisioner                          kube-system
	b7e2a8d9fee3d       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago       Running             kubernetes-dashboard        0                   948afd45059b3       kubernetes-dashboard-855c9754f9-f4mgk        kubernetes-dashboard
	86e64bf2703ce       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   6d2c1ff6b54af       busybox                                      default
	9d950fd859ff9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   83f06b15ec3c9       kube-proxy-c4vlq                             kube-system
	b026968d5f83c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           57 seconds ago       Running             coredns                     1                   4299616e037b7       coredns-66bc5c9577-m627n                     kube-system
	e9a7ae186819a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   2fc2e5f5545b8       kindnet-tps4w                                kube-system
	eab921e666565       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           57 seconds ago       Exited              storage-provisioner         1                   9581dc6f994aa       storage-provisioner                          kube-system
	cb346dcfd25d9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   528e66b06c719       kube-apiserver-no-preload-970287             kube-system
	882dabee1ff3b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   89c70792134b3       etcd-no-preload-970287                       kube-system
	27cb84754cd92       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   ec537d951e82d       kube-controller-manager-no-preload-970287    kube-system
	ecf2261fe6065       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   558dc46f76463       kube-scheduler-no-preload-970287             kube-system
	
	
	==> coredns [b026968d5f83c6d9b149d2641a2ce45e5705822b62c6b16cfc79a450a3e1b43b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40626 - 51701 "HINFO IN 5531993140901036532.4936131409708613596. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020113941s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               no-preload-970287
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-970287
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=no-preload-970287
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T21_19_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 21:19:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-970287
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 21:21:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 21:21:19 +0000   Tue, 14 Oct 2025 21:19:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 21:21:19 +0000   Tue, 14 Oct 2025 21:19:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 21:21:19 +0000   Tue, 14 Oct 2025 21:19:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 21:21:19 +0000   Tue, 14 Oct 2025 21:20:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-970287
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 786ed5ad164b4a7aae3c20ff7b233e05
	  System UUID:                739e9d19-3ca7-40c2-b50a-6b77f8bfdd3e
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-66bc5c9577-m627n                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m
	  kube-system                 etcd-no-preload-970287                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m5s
	  kube-system                 kindnet-tps4w                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m1s
	  kube-system                 kube-apiserver-no-preload-970287              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-controller-manager-no-preload-970287     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-c4vlq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-scheduler-no-preload-970287              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-5wgx4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-f4mgk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 118s                   kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m17s (x8 over 2m17s)  kubelet          Node no-preload-970287 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m17s (x8 over 2m17s)  kubelet          Node no-preload-970287 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m17s (x8 over 2m17s)  kubelet          Node no-preload-970287 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m6s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m6s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m5s                   kubelet          Node no-preload-970287 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m5s                   kubelet          Node no-preload-970287 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m5s                   kubelet          Node no-preload-970287 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m2s                   node-controller  Node no-preload-970287 event: Registered Node no-preload-970287 in Controller
	  Normal   NodeReady                104s                   kubelet          Node no-preload-970287 status is now: NodeReady
	  Normal   Starting                 66s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  66s (x8 over 66s)      kubelet          Node no-preload-970287 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 66s)      kubelet          Node no-preload-970287 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x8 over 66s)      kubelet          Node no-preload-970287 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                    node-controller  Node no-preload-970287 event: Registered Node no-preload-970287 in Controller
	
	
	==> dmesg <==
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	[Oct14 21:03] overlayfs: idmapped layers are currently not supported
	[Oct14 21:04] overlayfs: idmapped layers are currently not supported
	[Oct14 21:12] overlayfs: idmapped layers are currently not supported
	[Oct14 21:13] overlayfs: idmapped layers are currently not supported
	[ +32.288963] overlayfs: idmapped layers are currently not supported
	[Oct14 21:15] overlayfs: idmapped layers are currently not supported
	[Oct14 21:16] overlayfs: idmapped layers are currently not supported
	[ +21.775876] overlayfs: idmapped layers are currently not supported
	[Oct14 21:17] overlayfs: idmapped layers are currently not supported
	[Oct14 21:18] overlayfs: idmapped layers are currently not supported
	[Oct14 21:19] overlayfs: idmapped layers are currently not supported
	[ +43.590859] overlayfs: idmapped layers are currently not supported
	[Oct14 21:20] overlayfs: idmapped layers are currently not supported
	[ +26.799189] overlayfs: idmapped layers are currently not supported
	[ +17.106973] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [882dabee1ff3bf2239143b9f9994fbb5ca25e44aff582e09104e12db7b8feef7] <==
	{"level":"warn","ts":"2025-10-14T21:20:45.764130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:45.808656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:45.841251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:45.908739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:45.961119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:45.997416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.026779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.099191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.122843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.134486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.169534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.212645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.231283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.248456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.277718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.320500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.360599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.405121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.456903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.497056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.517302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.627259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.662203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.714039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.865058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45120","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:21:47 up  4:04,  0 user,  load average: 4.14, 3.62, 2.52
	Linux no-preload-970287 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e9a7ae186819ae864bbab79c2d91361c9ece7809280dd662a139a19b9dad3324] <==
	I1014 21:20:50.604607       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:20:50.604853       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1014 21:20:50.604963       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:20:50.604974       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:20:50.604986       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:20:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:20:51.053803       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:20:51.053878       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:20:51.053913       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:20:51.054301       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1014 21:21:21.054428       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1014 21:21:21.054442       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1014 21:21:21.061229       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1014 21:21:21.062244       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1014 21:21:22.554488       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1014 21:21:22.554539       1 metrics.go:72] Registering metrics
	I1014 21:21:22.554599       1 controller.go:711] "Syncing nftables rules"
	I1014 21:21:31.056450       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1014 21:21:31.056507       1 main.go:301] handling current node
	I1014 21:21:41.058198       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1014 21:21:41.058231       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cb346dcfd25d91290bca1f5ab694ec0a28b9aaa55c5a269ab50381f0fbd4e400] <==
	I1014 21:20:48.624627       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:20:48.632292       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 21:20:48.659652       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 21:20:48.673236       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1014 21:20:48.673809       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 21:20:48.673821       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 21:20:48.673906       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 21:20:48.674092       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 21:20:48.684539       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1014 21:20:48.689718       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1014 21:20:48.697576       1 cache.go:39] Caches are synced for autoregister controller
	I1014 21:20:48.697712       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1014 21:20:48.697747       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1014 21:20:48.709726       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1014 21:20:48.790857       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 21:20:49.185164       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 21:20:51.067195       1 controller.go:667] quota admission added evaluator for: namespaces
	I1014 21:20:51.558658       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1014 21:20:51.697802       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 21:20:51.725202       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 21:20:51.966584       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.28.94"}
	I1014 21:20:51.995361       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.239.172"}
	I1014 21:20:53.790766       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 21:20:53.842512       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 21:20:54.141326       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [27cb84754cd9227030c4fe42c91f3891d3e08309d600c013e0ba42914cd29efc] <==
	I1014 21:20:53.714856       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1014 21:20:53.716053       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1014 21:20:53.723930       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1014 21:20:53.724100       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1014 21:20:53.725164       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1014 21:20:53.727441       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1014 21:20:53.729950       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1014 21:20:53.732667       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1014 21:20:53.733810       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1014 21:20:53.734977       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1014 21:20:53.736133       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1014 21:20:53.736211       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1014 21:20:53.736248       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1014 21:20:53.741374       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1014 21:20:53.741463       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1014 21:20:53.746028       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:20:53.747308       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1014 21:20:53.747416       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 21:20:53.747509       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-970287"
	I1014 21:20:53.747574       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1014 21:20:53.748919       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1014 21:20:53.750402       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1014 21:20:53.753889       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1014 21:20:53.760355       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:20:53.766813       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-proxy [9d950fd859ff930a00c330f302d797f5be27bec2d4c5ef9580c021fce5851d5d] <==
	I1014 21:20:52.150214       1 server_linux.go:53] "Using iptables proxy"
	I1014 21:20:52.506909       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 21:20:52.618092       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 21:20:52.618134       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1014 21:20:52.618224       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 21:20:52.643384       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:20:52.643493       1 server_linux.go:132] "Using iptables Proxier"
	I1014 21:20:52.647137       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 21:20:52.647497       1 server.go:527] "Version info" version="v1.34.1"
	I1014 21:20:52.647751       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:20:52.649198       1 config.go:200] "Starting service config controller"
	I1014 21:20:52.649296       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 21:20:52.649349       1 config.go:106] "Starting endpoint slice config controller"
	I1014 21:20:52.649378       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 21:20:52.649414       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 21:20:52.649442       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 21:20:52.650102       1 config.go:309] "Starting node config controller"
	I1014 21:20:52.652523       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 21:20:52.652583       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 21:20:52.750076       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 21:20:52.750170       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 21:20:52.750197       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ecf2261fe60659c49b12b831c921d1d8a2f5ae0415991523e41524de6f31cb9a] <==
	I1014 21:20:48.382239       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:20:48.405838       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 21:20:48.406061       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:20:48.406110       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:20:48.406162       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1014 21:20:48.440807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1014 21:20:48.440894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1014 21:20:48.440943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1014 21:20:48.495128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1014 21:20:48.511338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1014 21:20:48.511424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1014 21:20:48.511476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1014 21:20:48.511548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1014 21:20:48.511606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1014 21:20:48.511660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1014 21:20:48.511719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1014 21:20:48.511772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1014 21:20:48.511818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1014 21:20:48.511865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1014 21:20:48.511913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1014 21:20:48.511961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1014 21:20:48.512010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1014 21:20:48.512123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1014 21:20:48.512175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1014 21:20:49.912784       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 14 21:20:54 no-preload-970287 kubelet[772]: I1014 21:20:54.358201     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/545dbf55-bce4-4aed-a463-550b5136ae40-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-f4mgk\" (UID: \"545dbf55-bce4-4aed-a463-550b5136ae40\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f4mgk"
	Oct 14 21:20:54 no-preload-970287 kubelet[772]: I1014 21:20:54.358787     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbgf2\" (UniqueName: \"kubernetes.io/projected/545dbf55-bce4-4aed-a463-550b5136ae40-kube-api-access-mbgf2\") pod \"kubernetes-dashboard-855c9754f9-f4mgk\" (UID: \"545dbf55-bce4-4aed-a463-550b5136ae40\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f4mgk"
	Oct 14 21:20:54 no-preload-970287 kubelet[772]: I1014 21:20:54.459214     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e6d88fcf-ee45-4570-b2da-1c3795d0356a-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-5wgx4\" (UID: \"e6d88fcf-ee45-4570-b2da-1c3795d0356a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4"
	Oct 14 21:20:54 no-preload-970287 kubelet[772]: I1014 21:20:54.459408     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wh8w\" (UniqueName: \"kubernetes.io/projected/e6d88fcf-ee45-4570-b2da-1c3795d0356a-kube-api-access-7wh8w\") pod \"dashboard-metrics-scraper-6ffb444bf9-5wgx4\" (UID: \"e6d88fcf-ee45-4570-b2da-1c3795d0356a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4"
	Oct 14 21:20:54 no-preload-970287 kubelet[772]: W1014 21:20:54.673084     772 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/crio-948afd45059b3c04533f589b04f3343070ecff7b1b6b0b53cb21c706dc376958 WatchSource:0}: Error finding container 948afd45059b3c04533f589b04f3343070ecff7b1b6b0b53cb21c706dc376958: Status 404 returned error can't find the container with id 948afd45059b3c04533f589b04f3343070ecff7b1b6b0b53cb21c706dc376958
	Oct 14 21:20:54 no-preload-970287 kubelet[772]: W1014 21:20:54.694090     772 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/crio-edfc02a18831be8585747bf4198b47582a110453ecb5f56563461ccc9da56b68 WatchSource:0}: Error finding container edfc02a18831be8585747bf4198b47582a110453ecb5f56563461ccc9da56b68: Status 404 returned error can't find the container with id edfc02a18831be8585747bf4198b47582a110453ecb5f56563461ccc9da56b68
	Oct 14 21:21:09 no-preload-970287 kubelet[772]: I1014 21:21:09.255956     772 scope.go:117] "RemoveContainer" containerID="086e4dc527389209e7131154cc3e883964f2a74b62f02262310b311cbd38636e"
	Oct 14 21:21:09 no-preload-970287 kubelet[772]: I1014 21:21:09.289256     772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f4mgk" podStartSLOduration=8.133468338 podStartE2EDuration="15.289239359s" podCreationTimestamp="2025-10-14 21:20:54 +0000 UTC" firstStartedPulling="2025-10-14 21:20:54.677032033 +0000 UTC m=+13.439887693" lastFinishedPulling="2025-10-14 21:21:01.832803054 +0000 UTC m=+20.595658714" observedRunningTime="2025-10-14 21:21:02.264254717 +0000 UTC m=+21.027110369" watchObservedRunningTime="2025-10-14 21:21:09.289239359 +0000 UTC m=+28.052095011"
	Oct 14 21:21:10 no-preload-970287 kubelet[772]: I1014 21:21:10.260266     772 scope.go:117] "RemoveContainer" containerID="086e4dc527389209e7131154cc3e883964f2a74b62f02262310b311cbd38636e"
	Oct 14 21:21:10 no-preload-970287 kubelet[772]: I1014 21:21:10.260590     772 scope.go:117] "RemoveContainer" containerID="cae9e5ee03bef7f43cd086d714ccad801abb6ba4e1f5d3dba576280eb63afe39"
	Oct 14 21:21:10 no-preload-970287 kubelet[772]: E1014 21:21:10.260740     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5wgx4_kubernetes-dashboard(e6d88fcf-ee45-4570-b2da-1c3795d0356a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4" podUID="e6d88fcf-ee45-4570-b2da-1c3795d0356a"
	Oct 14 21:21:11 no-preload-970287 kubelet[772]: I1014 21:21:11.265174     772 scope.go:117] "RemoveContainer" containerID="cae9e5ee03bef7f43cd086d714ccad801abb6ba4e1f5d3dba576280eb63afe39"
	Oct 14 21:21:11 no-preload-970287 kubelet[772]: E1014 21:21:11.265394     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5wgx4_kubernetes-dashboard(e6d88fcf-ee45-4570-b2da-1c3795d0356a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4" podUID="e6d88fcf-ee45-4570-b2da-1c3795d0356a"
	Oct 14 21:21:14 no-preload-970287 kubelet[772]: I1014 21:21:14.640802     772 scope.go:117] "RemoveContainer" containerID="cae9e5ee03bef7f43cd086d714ccad801abb6ba4e1f5d3dba576280eb63afe39"
	Oct 14 21:21:14 no-preload-970287 kubelet[772]: E1014 21:21:14.640984     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5wgx4_kubernetes-dashboard(e6d88fcf-ee45-4570-b2da-1c3795d0356a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4" podUID="e6d88fcf-ee45-4570-b2da-1c3795d0356a"
	Oct 14 21:21:22 no-preload-970287 kubelet[772]: I1014 21:21:22.290623     772 scope.go:117] "RemoveContainer" containerID="eab921e666565670bd355579f8f27093ec17e0451ca0dd9059faa6ccbfe4f416"
	Oct 14 21:21:25 no-preload-970287 kubelet[772]: I1014 21:21:25.778249     772 scope.go:117] "RemoveContainer" containerID="cae9e5ee03bef7f43cd086d714ccad801abb6ba4e1f5d3dba576280eb63afe39"
	Oct 14 21:21:26 no-preload-970287 kubelet[772]: I1014 21:21:26.303923     772 scope.go:117] "RemoveContainer" containerID="cae9e5ee03bef7f43cd086d714ccad801abb6ba4e1f5d3dba576280eb63afe39"
	Oct 14 21:21:26 no-preload-970287 kubelet[772]: I1014 21:21:26.304234     772 scope.go:117] "RemoveContainer" containerID="1c0bac6eb96e4833497570b1054b50fb2a2cdbc485bc3e4f08eb66f614245d09"
	Oct 14 21:21:26 no-preload-970287 kubelet[772]: E1014 21:21:26.304420     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5wgx4_kubernetes-dashboard(e6d88fcf-ee45-4570-b2da-1c3795d0356a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4" podUID="e6d88fcf-ee45-4570-b2da-1c3795d0356a"
	Oct 14 21:21:34 no-preload-970287 kubelet[772]: I1014 21:21:34.640953     772 scope.go:117] "RemoveContainer" containerID="1c0bac6eb96e4833497570b1054b50fb2a2cdbc485bc3e4f08eb66f614245d09"
	Oct 14 21:21:34 no-preload-970287 kubelet[772]: E1014 21:21:34.641607     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5wgx4_kubernetes-dashboard(e6d88fcf-ee45-4570-b2da-1c3795d0356a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4" podUID="e6d88fcf-ee45-4570-b2da-1c3795d0356a"
	Oct 14 21:21:44 no-preload-970287 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 14 21:21:45 no-preload-970287 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 14 21:21:45 no-preload-970287 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b7e2a8d9fee3df01c06c63bcdfedfa290e5f10824bd890a50ba040a29633a6ff] <==
	2025/10/14 21:21:01 Using namespace: kubernetes-dashboard
	2025/10/14 21:21:01 Using in-cluster config to connect to apiserver
	2025/10/14 21:21:01 Using secret token for csrf signing
	2025/10/14 21:21:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/14 21:21:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/14 21:21:01 Successful initial request to the apiserver, version: v1.34.1
	2025/10/14 21:21:01 Generating JWE encryption key
	2025/10/14 21:21:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/14 21:21:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/14 21:21:02 Initializing JWE encryption key from synchronized object
	2025/10/14 21:21:02 Creating in-cluster Sidecar client
	2025/10/14 21:21:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/14 21:21:02 Serving insecurely on HTTP port: 9090
	2025/10/14 21:21:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/14 21:21:01 Starting overwatch
	
	
	==> storage-provisioner [4bcc6a658ca4f326964a68891e92ff5132a7a58c21942626f11381fb247fb86f] <==
	I1014 21:21:22.337790       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 21:21:22.354154       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 21:21:22.354237       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1014 21:21:22.357076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:25.815671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:30.077466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:33.676061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:36.729611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:39.751697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:39.756288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 21:21:39.756461       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 21:21:39.756652       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-970287_ac067841-903c-49d5-a35a-ed0ccd22f616!
	I1014 21:21:39.757831       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4ff2602c-0892-49a1-af19-280368cb1cc0", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-970287_ac067841-903c-49d5-a35a-ed0ccd22f616 became leader
	W1014 21:21:39.765185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:39.768759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 21:21:39.857085       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-970287_ac067841-903c-49d5-a35a-ed0ccd22f616!
	W1014 21:21:41.771916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:41.777196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:43.781296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:43.785887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:45.789066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:45.795671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:47.799692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:47.806230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [eab921e666565670bd355579f8f27093ec17e0451ca0dd9059faa6ccbfe4f416] <==
	I1014 21:20:51.349630       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1014 21:21:21.351455       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970287 -n no-preload-970287
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970287 -n no-preload-970287: exit status 2 (390.16738ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-970287 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-970287
helpers_test.go:243: (dbg) docker inspect no-preload-970287:

-- stdout --
	[
	    {
	        "Id": "9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497",
	        "Created": "2025-10-14T21:18:55.141228098Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 757747,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T21:20:32.128940448Z",
	            "FinishedAt": "2025-10-14T21:20:31.021404108Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/hostname",
	        "HostsPath": "/var/lib/docker/containers/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/hosts",
	        "LogPath": "/var/lib/docker/containers/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497-json.log",
	        "Name": "/no-preload-970287",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-970287:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-970287",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497",
	                "LowerDir": "/var/lib/docker/overlay2/9ca0b0ff57c15a632ab5edc016a9da93666239e68b4aebbd8ee7403dacc3a61a-init/diff:/var/lib/docker/overlay2/77c6f05db058c9bd4936a3700d79272d9cce62b811df81cc1969e750eb864997/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9ca0b0ff57c15a632ab5edc016a9da93666239e68b4aebbd8ee7403dacc3a61a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9ca0b0ff57c15a632ab5edc016a9da93666239e68b4aebbd8ee7403dacc3a61a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9ca0b0ff57c15a632ab5edc016a9da93666239e68b4aebbd8ee7403dacc3a61a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-970287",
	                "Source": "/var/lib/docker/volumes/no-preload-970287/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-970287",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-970287",
	                "name.minikube.sigs.k8s.io": "no-preload-970287",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "adc6ea98da52cd00f8303436c078b72f20afbb5c95934de5c28feae8da074fc2",
	            "SandboxKey": "/var/run/docker/netns/adc6ea98da52",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33827"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33828"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33829"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-970287": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:01:75:59:74:0e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "006eb97cac5b79644e6bd32a155d4cd1ef5c55fddbcd2356a8ffe605bdf15f5f",
	                    "EndpointID": "91ff71b275316083cb9c427ace956a1d4d50e01fcc0d3bf73caa797921a70544",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-970287",
	                        "9ce7c6b3a1e2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
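The inspect output above is worth a second look for the port wiring: HostConfig.PortBindings publishes 22, 2376, 5000, 8443 and 32443 on 127.0.0.1 with an empty HostPort, so Docker assigns ephemeral host ports, and the concrete values (33827 for 22/tcp, 33830 for 8443/tcp, and so on) only appear under NetworkSettings.Ports. Below is a small stand-alone Go sketch, assuming Docker is on PATH and reusing the container name from the output above, that reads the SSH mapping the same way the harness's "docker container inspect -f" template does later in the log.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// binding mirrors one entry under NetworkSettings.Ports in "docker inspect".
type binding struct {
	HostIP   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

// container keeps only the fields this sketch needs from the inspect JSON.
type container struct {
	NetworkSettings struct {
		Ports map[string][]binding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// "docker inspect" prints a JSON array with one element per container.
	out, err := exec.Command("docker", "inspect", "no-preload-970287").Output()
	if err != nil {
		log.Fatal(err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
		log.Fatalf("unexpected inspect output: %v", err)
	}
	for _, b := range cs[0].NetworkSettings.Ports["22/tcp"] {
		// With the mappings above this prints 127.0.0.1:33827, the address
		// provisionDockerMachine dials for SSH further down in the log.
		fmt.Printf("ssh published on %s:%s\n", b.HostIP, b.HostPort)
	}
}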
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970287 -n no-preload-970287
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970287 -n no-preload-970287: exit status 2 (362.094907ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-970287 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-970287 logs -n 25: (1.296157408s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p default-k8s-diff-port-319221 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-319221                                                                                                                                                                                                               │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ delete  │ -p default-k8s-diff-port-319221                                                                                                                                                                                                               │ default-k8s-diff-port-319221 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ delete  │ -p disable-driver-mounts-457849                                                                                                                                                                                                               │ disable-driver-mounts-457849 │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:18 UTC │
	│ start   │ -p no-preload-970287 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:18 UTC │ 14 Oct 25 21:20 UTC │
	│ image   │ embed-certs-650951 image list --format=json                                                                                                                                                                                                   │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ pause   │ -p embed-certs-650951 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │                     │
	│ delete  │ -p embed-certs-650951                                                                                                                                                                                                                         │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ delete  │ -p embed-certs-650951                                                                                                                                                                                                                         │ embed-certs-650951           │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:19 UTC │
	│ start   │ -p newest-cni-718270 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:19 UTC │ 14 Oct 25 21:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-718270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	│ stop    │ -p newest-cni-718270 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ addons  │ enable dashboard -p newest-cni-718270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ start   │ -p newest-cni-718270 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ addons  │ enable metrics-server -p no-preload-970287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	│ stop    │ -p no-preload-970287 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ image   │ newest-cni-718270 image list --format=json                                                                                                                                                                                                    │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ pause   │ -p newest-cni-718270 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-970287 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ start   │ -p no-preload-970287 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:21 UTC │
	│ delete  │ -p newest-cni-718270                                                                                                                                                                                                                          │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ delete  │ -p newest-cni-718270                                                                                                                                                                                                                          │ newest-cni-718270            │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │ 14 Oct 25 21:20 UTC │
	│ start   │ -p auto-810782 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-810782                  │ jenkins │ v1.37.0 │ 14 Oct 25 21:20 UTC │                     │
	│ image   │ no-preload-970287 image list --format=json                                                                                                                                                                                                    │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:21 UTC │ 14 Oct 25 21:21 UTC │
	│ pause   │ -p no-preload-970287 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-970287            │ jenkins │ v1.37.0 │ 14 Oct 25 21:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 21:20:34
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 21:20:34.223645  758490 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:20:34.223755  758490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:20:34.223765  758490 out.go:374] Setting ErrFile to fd 2...
	I1014 21:20:34.223770  758490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:20:34.224329  758490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:20:34.224803  758490 out.go:368] Setting JSON to false
	I1014 21:20:34.225755  758490 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14585,"bootTime":1760462250,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:20:34.225843  758490 start.go:141] virtualization:  
	I1014 21:20:34.229757  758490 out.go:179] * [auto-810782] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:20:34.232977  758490 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:20:34.233094  758490 notify.go:220] Checking for updates...
	I1014 21:20:34.239750  758490 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:20:34.243026  758490 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:20:34.246503  758490 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:20:34.249457  758490 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:20:34.252579  758490 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:20:34.256076  758490 config.go:182] Loaded profile config "no-preload-970287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:34.256212  758490 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:20:34.279675  758490 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:20:34.279809  758490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:20:34.338222  758490 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-14 21:20:34.328662707 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:20:34.338332  758490 docker.go:318] overlay module found
	I1014 21:20:34.341561  758490 out.go:179] * Using the docker driver based on user configuration
	I1014 21:20:34.344466  758490 start.go:305] selected driver: docker
	I1014 21:20:34.344485  758490 start.go:925] validating driver "docker" against <nil>
	I1014 21:20:34.344515  758490 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:20:34.345237  758490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:20:34.400129  758490 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-14 21:20:34.390856887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:20:34.400283  758490 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 21:20:34.400589  758490 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:20:34.403585  758490 out.go:179] * Using Docker driver with root privileges
	I1014 21:20:34.406351  758490 cni.go:84] Creating CNI manager for ""
	I1014 21:20:34.406417  758490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:20:34.406433  758490 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 21:20:34.406507  758490 start.go:349] cluster config:
	{Name:auto-810782 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-810782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1014 21:20:34.409571  758490 out.go:179] * Starting "auto-810782" primary control-plane node in "auto-810782" cluster
	I1014 21:20:34.412350  758490 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 21:20:34.415196  758490 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 21:20:34.418004  758490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:20:34.418050  758490 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 21:20:34.418061  758490 cache.go:58] Caching tarball of preloaded images
	I1014 21:20:34.418110  758490 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 21:20:34.418160  758490 preload.go:233] Found /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1014 21:20:34.418170  758490 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
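Restarting this profile never re-downloads the preload: the step above only has to find the cached tarball before it can skip the download. A minimal sketch of that existence check follows; the helper name and the HOME-based default path are illustrative rather than minikube's actual code, which keeps the cache under MINIKUBE_HOME and may do further validation.

package main

import (
	"fmt"
	"os"
)

// hasCachedPreload reports whether the preloaded-images tarball is already on
// disk, which is the essence of the "Found local preload ... skipping download"
// decision in the log above.
func hasCachedPreload(path string) bool {
	info, err := os.Stat(path)
	return err == nil && info.Size() > 0
}

func main() {
	path := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4")
	fmt.Println("preload cached:", hasCachedPreload(path))
}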
	I1014 21:20:34.418271  758490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/config.json ...
	I1014 21:20:34.418288  758490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/config.json: {Name:mkf5766e2d9b5ae306968b16baf59991fd0a7831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:34.436559  758490 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 21:20:34.436582  758490 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 21:20:34.436603  758490 cache.go:232] Successfully downloaded all kic artifacts
	I1014 21:20:34.436624  758490 start.go:360] acquireMachinesLock for auto-810782: {Name:mk46b2543949e29c5e430185a89cc9cdf7ed96d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 21:20:34.436728  758490 start.go:364] duration metric: took 83.227µs to acquireMachinesLock for "auto-810782"
	I1014 21:20:34.436759  758490 start.go:93] Provisioning new machine with config: &{Name:auto-810782 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-810782 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:20:34.436831  758490 start.go:125] createHost starting for "" (driver="docker")
	I1014 21:20:32.087720  757567 out.go:252] * Restarting existing docker container for "no-preload-970287" ...
	I1014 21:20:32.087808  757567 cli_runner.go:164] Run: docker start no-preload-970287
	I1014 21:20:32.397796  757567 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:20:32.426176  757567 kic.go:430] container "no-preload-970287" state is running.
	I1014 21:20:32.426537  757567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970287
	I1014 21:20:32.450603  757567 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/config.json ...
	I1014 21:20:32.450838  757567 machine.go:93] provisionDockerMachine start ...
	I1014 21:20:32.450906  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:32.473940  757567 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:32.474265  757567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1014 21:20:32.474284  757567 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:20:32.474951  757567 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1014 21:20:35.628058  757567 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-970287
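The dial error a moment earlier is expected right after docker start: sshd inside the kicbase container is still coming up, so the first handshake fails and libmachine evidently retries until the forwarded port answers, which here takes roughly three seconds. A simplified wait-for-port loop in the same spirit, illustrative only and not the project's actual retry logic:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP polls addr until a TCP connection succeeds or the deadline passes.
// A successful dial is a weaker check than a full SSH handshake, but it is
// usually enough to know the forwarded port (127.0.0.1:33827 above) is live.
func waitForTCP(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForTCP("127.0.0.1:33827", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh port is accepting connections")
}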
	
	I1014 21:20:35.628100  757567 ubuntu.go:182] provisioning hostname "no-preload-970287"
	I1014 21:20:35.628214  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:35.647554  757567 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:35.647884  757567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1014 21:20:35.647902  757567 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-970287 && echo "no-preload-970287" | sudo tee /etc/hostname
	I1014 21:20:35.810491  757567 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-970287
	
	I1014 21:20:35.810572  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:35.832669  757567 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:35.833003  757567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1014 21:20:35.833028  757567 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-970287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-970287/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-970287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:20:35.984869  757567 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 21:20:35.984944  757567 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:20:35.985007  757567 ubuntu.go:190] setting up certificates
	I1014 21:20:35.985034  757567 provision.go:84] configureAuth start
	I1014 21:20:35.985127  757567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970287
	I1014 21:20:36.008725  757567 provision.go:143] copyHostCerts
	I1014 21:20:36.008797  757567 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:20:36.008807  757567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:20:36.008892  757567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:20:36.008995  757567 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:20:36.009007  757567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:20:36.009037  757567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:20:36.009089  757567 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:20:36.009093  757567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:20:36.009117  757567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:20:36.009163  757567 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.no-preload-970287 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-970287]
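configureAuth regenerates the server certificate so that its subject alternative names cover every address a client might use: 127.0.0.1, the container IP 192.168.76.2, localhost, minikube and the profile name. A compact sketch of issuing a certificate with those SANs using only the Go standard library; it is self-signed for brevity, whereas the step above signs with the cached minikube CA key.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-970287"}},
		NotBefore:    time.Now(),
		// 26280h matches the CertExpiration value shown in the cluster configs in this log.
		NotAfter:    time.Now().Add(26280 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log: two IPs plus three DNS names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-970287"},
	}
	// Self-signed: the template doubles as its own parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}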
	I1014 21:20:36.356328  757567 provision.go:177] copyRemoteCerts
	I1014 21:20:36.356437  757567 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:20:36.356506  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:36.374623  757567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:20:36.476891  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:20:36.496564  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 21:20:36.521182  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 21:20:36.546095  757567 provision.go:87] duration metric: took 561.023061ms to configureAuth
	I1014 21:20:36.546121  757567 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:20:36.546307  757567 config.go:182] Loaded profile config "no-preload-970287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:36.546415  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:36.568664  757567 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:36.568980  757567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1014 21:20:36.568999  757567 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:20:34.440107  758490 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 21:20:34.440365  758490 start.go:159] libmachine.API.Create for "auto-810782" (driver="docker")
	I1014 21:20:34.440416  758490 client.go:168] LocalClient.Create starting
	I1014 21:20:34.440491  758490 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem
	I1014 21:20:34.440527  758490 main.go:141] libmachine: Decoding PEM data...
	I1014 21:20:34.440544  758490 main.go:141] libmachine: Parsing certificate...
	I1014 21:20:34.440607  758490 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem
	I1014 21:20:34.440631  758490 main.go:141] libmachine: Decoding PEM data...
	I1014 21:20:34.440645  758490 main.go:141] libmachine: Parsing certificate...
	I1014 21:20:34.441001  758490 cli_runner.go:164] Run: docker network inspect auto-810782 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 21:20:34.456556  758490 cli_runner.go:211] docker network inspect auto-810782 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 21:20:34.456643  758490 network_create.go:284] running [docker network inspect auto-810782] to gather additional debugging logs...
	I1014 21:20:34.456664  758490 cli_runner.go:164] Run: docker network inspect auto-810782
	W1014 21:20:34.472640  758490 cli_runner.go:211] docker network inspect auto-810782 returned with exit code 1
	I1014 21:20:34.472670  758490 network_create.go:287] error running [docker network inspect auto-810782]: docker network inspect auto-810782: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-810782 not found
	I1014 21:20:34.472684  758490 network_create.go:289] output of [docker network inspect auto-810782]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-810782 not found
	
	** /stderr **
	I1014 21:20:34.472785  758490 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:20:34.490489  758490 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3483fd8ccce IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:16:f0:2b:be:5b} reservation:<nil>}
	I1014 21:20:34.490817  758490 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-92eecc9d5dcd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:46:39:04:de:cc:9d} reservation:<nil>}
	I1014 21:20:34.491181  758490 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6db3c426d67c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:2a:f4:c0:70:fe} reservation:<nil>}
	I1014 21:20:34.491446  758490 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-006eb97cac5b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fa:d4:13:6b:f6:4c} reservation:<nil>}
	I1014 21:20:34.491878  758490 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a547e0}
	I1014 21:20:34.491902  758490 network_create.go:124] attempt to create docker network auto-810782 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1014 21:20:34.491956  758490 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-810782 auto-810782
	I1014 21:20:34.542459  758490 network_create.go:108] docker network auto-810782 192.168.85.0/24 created
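The subnet scan just above walks candidate private /24 networks, skips any that already back an existing bridge (192.168.49.0, .58.0, .67.0 and .76.0 here) and hands the first free one, 192.168.85.0/24, to docker network create. A stripped-down illustration of that selection with the taken subnets hard-coded from the log; the real network.go discovers them from the host's interfaces instead.

package main

import "fmt"

func main() {
	// Subnets already claimed by docker bridges, as reported in the log above.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	// Candidates advance by 9 in the third octet, matching the sequence the
	// log probes: 49, 58, 67, 76, 85, ...
	for third := 49; third <= 254; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[subnet] {
			fmt.Println("skipping taken subnet", subnet)
			continue
		}
		fmt.Println("using free private subnet", subnet)
		// At this point the harness runs:
		// docker network create --driver=bridge --subnet=192.168.85.0/24 ...
		break
	}
}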
	I1014 21:20:34.542497  758490 kic.go:121] calculated static IP "192.168.85.2" for the "auto-810782" container
	I1014 21:20:34.542567  758490 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 21:20:34.558539  758490 cli_runner.go:164] Run: docker volume create auto-810782 --label name.minikube.sigs.k8s.io=auto-810782 --label created_by.minikube.sigs.k8s.io=true
	I1014 21:20:34.576270  758490 oci.go:103] Successfully created a docker volume auto-810782
	I1014 21:20:34.576387  758490 cli_runner.go:164] Run: docker run --rm --name auto-810782-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-810782 --entrypoint /usr/bin/test -v auto-810782:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 21:20:35.081442  758490 oci.go:107] Successfully prepared a docker volume auto-810782
	I1014 21:20:35.081513  758490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:20:35.081536  758490 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 21:20:35.081622  758490 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-810782:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 21:20:36.941124  757567 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:20:36.941197  757567 machine.go:96] duration metric: took 4.490331779s to provisionDockerMachine
	I1014 21:20:36.941223  757567 start.go:293] postStartSetup for "no-preload-970287" (driver="docker")
	I1014 21:20:36.941265  757567 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:20:36.941364  757567 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:20:36.941439  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:36.969205  757567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:20:37.077483  757567 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:20:37.081669  757567 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:20:37.081700  757567 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:20:37.081712  757567 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:20:37.081784  757567 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:20:37.081877  757567 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:20:37.081977  757567 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:20:37.091262  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:20:37.112538  757567 start.go:296] duration metric: took 171.284368ms for postStartSetup
	I1014 21:20:37.112662  757567 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:20:37.112750  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:37.130825  757567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:20:37.233268  757567 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 21:20:37.237861  757567 fix.go:56] duration metric: took 5.175904148s for fixHost
	I1014 21:20:37.237903  757567 start.go:83] releasing machines lock for "no-preload-970287", held for 5.175955309s
	I1014 21:20:37.237986  757567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970287
	I1014 21:20:37.255571  757567 ssh_runner.go:195] Run: cat /version.json
	I1014 21:20:37.255636  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:37.255956  757567 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:20:37.256038  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:37.281422  757567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:20:37.292534  757567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:20:37.392282  757567 ssh_runner.go:195] Run: systemctl --version
	I1014 21:20:37.496794  757567 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:20:37.544708  757567 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:20:37.549363  757567 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:20:37.549435  757567 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:20:37.557247  757567 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 21:20:37.557272  757567 start.go:495] detecting cgroup driver to use...
	I1014 21:20:37.557305  757567 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:20:37.557353  757567 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:20:37.572789  757567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:20:37.586834  757567 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:20:37.586896  757567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:20:37.603186  757567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:20:37.617411  757567 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:20:37.754960  757567 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:20:37.903262  757567 docker.go:234] disabling docker service ...
	I1014 21:20:37.903324  757567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:20:37.921061  757567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:20:37.934760  757567 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:20:38.114853  757567 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:20:38.262997  757567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:20:38.280254  757567 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:20:38.296047  757567 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:20:38.296110  757567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:38.305436  757567 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:20:38.305503  757567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:38.314596  757567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:38.324937  757567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:38.334028  757567 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:20:38.342408  757567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:38.352696  757567 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:38.361192  757567 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:38.370085  757567 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:20:38.378393  757567 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:20:38.386996  757567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:38.554906  757567 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 21:20:40.467729  757567 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.912792426s)
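The lines above rewrite cri-o's drop-in config and restart the runtime. Consolidated into a single sketch (commands and paths taken directly from the log; it assumes the default 02-crio.conf drop-in already exists on the node), the node-side sequence is roughly:

	# Sketch of the cri-o configuration steps shown above, values verbatim from this run.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# Point crictl at the cri-o socket.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# Match the pause image and cgroup driver that kubeadm/kubelet will use.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# Let pods bind low ports without extra capabilities.
	sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
	sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	# Enable IPv4 forwarding, then restart cri-o so the changes take effect.
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio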
	I1014 21:20:40.467759  757567 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:20:40.467809  757567 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:20:40.475298  757567 start.go:563] Will wait 60s for crictl version
	I1014 21:20:40.475382  757567 ssh_runner.go:195] Run: which crictl
	I1014 21:20:40.479444  757567 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:20:40.527163  757567 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:20:40.527256  757567 ssh_runner.go:195] Run: crio --version
	I1014 21:20:40.564160  757567 ssh_runner.go:195] Run: crio --version
	I1014 21:20:40.621239  757567 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 21:20:40.624274  757567 cli_runner.go:164] Run: docker network inspect no-preload-970287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:20:40.650222  757567 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1014 21:20:40.654298  757567 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:20:40.665597  757567 kubeadm.go:883] updating cluster {Name:no-preload-970287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-970287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:20:40.665711  757567 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:20:40.665753  757567 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:20:40.712243  757567 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:20:40.712280  757567 cache_images.go:85] Images are preloaded, skipping loading
	I1014 21:20:40.712288  757567 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1014 21:20:40.712399  757567 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-970287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-970287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
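The ExecStart override above is what minikube writes into the kubelet systemd drop-in (the 10-kubeadm.conf scp a few lines below). If you want to see the effective unit plus that drop-in on the node, something like the following works (assuming the minikube binary and profile name used in this run):

	# Show kubelet.service together with the installed 10-kubeadm.conf drop-in.
	minikube ssh -p no-preload-970287 -- systemctl cat kubelet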
	I1014 21:20:40.712497  757567 ssh_runner.go:195] Run: crio config
	I1014 21:20:40.800099  757567 cni.go:84] Creating CNI manager for ""
	I1014 21:20:40.800122  757567 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:20:40.800144  757567 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 21:20:40.800167  757567 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-970287 NodeName:no-preload-970287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:20:40.800339  757567 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-970287"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 21:20:40.800406  757567 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:20:40.817484  757567 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:20:40.817557  757567 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:20:40.827162  757567 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1014 21:20:40.848965  757567 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:20:40.880534  757567 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
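That 2214-byte payload is the kubeadm config dumped above, staged on the node as /var/tmp/minikube/kubeadm.yaml.new. A quick way to eyeball or sanity-check it in place might look like the following; note the `kubeadm config validate` subcommand is only available in recent kubeadm releases, so the second command is a hedged suggestion rather than something this run executed:

	# Inspect the staged config, then ask kubeadm whether it parses cleanly.
	minikube ssh -p no-preload-970287 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	minikube ssh -p no-preload-970287 -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new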
	I1014 21:20:40.905344  757567 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:20:40.911779  757567 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:20:40.940252  757567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:41.213519  757567 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:20:41.246871  757567 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287 for IP: 192.168.76.2
	I1014 21:20:41.246893  757567 certs.go:195] generating shared ca certs ...
	I1014 21:20:41.246908  757567 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:41.247038  757567 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:20:41.247092  757567 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:20:41.247103  757567 certs.go:257] generating profile certs ...
	I1014 21:20:41.247191  757567 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.key
	I1014 21:20:41.247282  757567 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/apiserver.key.5c06cc4f
	I1014 21:20:41.247329  757567 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/proxy-client.key
	I1014 21:20:41.247450  757567 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:20:41.247484  757567 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:20:41.247497  757567 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:20:41.247523  757567 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:20:41.247568  757567 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:20:41.247596  757567 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:20:41.247643  757567 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:20:41.248238  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:20:41.290128  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:20:41.323294  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:20:41.419092  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:20:41.509972  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 21:20:41.564114  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 21:20:41.583289  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:20:41.618451  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 21:20:41.665657  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:20:41.734908  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:20:41.797747  757567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:20:41.822117  757567 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:20:41.836291  757567 ssh_runner.go:195] Run: openssl version
	I1014 21:20:41.846515  757567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:20:41.857373  757567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:41.864781  757567 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:41.864863  757567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:41.940535  757567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 21:20:41.949218  757567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:20:41.959049  757567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:20:41.964210  757567 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:20:41.964374  757567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:20:42.017940  757567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:20:42.030007  757567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:20:42.041700  757567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:20:42.047190  757567 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:20:42.047273  757567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:20:42.096725  757567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
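The hash-named links created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's c_rehash convention: the link name is the certificate's subject hash plus a .0 suffix, which is exactly what the preceding `openssl x509 -hash -noout` calls compute. For example, for the minikube CA installed in this run:

	# Print the subject hash OpenSSL uses to look the CA up under /etc/ssl/certs.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# Expected output here: b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above.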
	I1014 21:20:42.107078  757567 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:20:42.114380  757567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 21:20:42.163389  757567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 21:20:42.240837  757567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 21:20:42.309169  757567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 21:20:42.382780  757567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 21:20:42.476041  757567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
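Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it remains valid past that window, which is how minikube decides the existing control-plane certs can be reused. The same check can be reproduced by hand on the node, e.g.:

	# Prints "Certificate will not expire" and exits 0 if the cert is good for at least another day.
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400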
	I1014 21:20:42.582660  757567 kubeadm.go:400] StartCluster: {Name:no-preload-970287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-970287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:20:42.582802  757567 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:20:42.582914  757567 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:20:42.640217  757567 cri.go:89] found id: "cb346dcfd25d91290bca1f5ab694ec0a28b9aaa55c5a269ab50381f0fbd4e400"
	I1014 21:20:42.640291  757567 cri.go:89] found id: "882dabee1ff3bf2239143b9f9994fbb5ca25e44aff582e09104e12db7b8feef7"
	I1014 21:20:42.640332  757567 cri.go:89] found id: "27cb84754cd9227030c4fe42c91f3891d3e08309d600c013e0ba42914cd29efc"
	I1014 21:20:42.640361  757567 cri.go:89] found id: "ecf2261fe60659c49b12b831c921d1d8a2f5ae0415991523e41524de6f31cb9a"
	I1014 21:20:42.640378  757567 cri.go:89] found id: ""
	I1014 21:20:42.640461  757567 ssh_runner.go:195] Run: sudo runc list -f json
	W1014 21:20:42.662774  757567 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T21:20:42Z" level=error msg="open /run/runc: no such file or directory"
	I1014 21:20:42.662933  757567 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:20:42.678175  757567 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 21:20:42.678208  757567 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 21:20:42.678262  757567 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 21:20:42.690181  757567 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 21:20:42.690638  757567 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-970287" does not appear in /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:20:42.690791  757567 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-545047/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-970287" cluster setting kubeconfig missing "no-preload-970287" context setting]
	I1014 21:20:42.691117  757567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:42.692756  757567 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 21:20:42.708506  757567 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1014 21:20:42.708536  757567 kubeadm.go:601] duration metric: took 30.321757ms to restartPrimaryControlPlane
	I1014 21:20:42.708544  757567 kubeadm.go:402] duration metric: took 125.895629ms to StartCluster
	I1014 21:20:42.708559  757567 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:42.708618  757567 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:20:42.709207  757567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:42.709389  757567 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:20:42.709656  757567 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 21:20:42.709726  757567 addons.go:69] Setting storage-provisioner=true in profile "no-preload-970287"
	I1014 21:20:42.709740  757567 addons.go:238] Setting addon storage-provisioner=true in "no-preload-970287"
	W1014 21:20:42.709745  757567 addons.go:247] addon storage-provisioner should already be in state true
	I1014 21:20:42.709769  757567 host.go:66] Checking if "no-preload-970287" exists ...
	I1014 21:20:42.710207  757567 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:20:42.711359  757567 addons.go:69] Setting dashboard=true in profile "no-preload-970287"
	I1014 21:20:42.711411  757567 addons.go:238] Setting addon dashboard=true in "no-preload-970287"
	W1014 21:20:42.711432  757567 addons.go:247] addon dashboard should already be in state true
	I1014 21:20:42.711496  757567 host.go:66] Checking if "no-preload-970287" exists ...
	I1014 21:20:42.711952  757567 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:20:42.717315  757567 config.go:182] Loaded profile config "no-preload-970287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:42.717429  757567 out.go:179] * Verifying Kubernetes components...
	I1014 21:20:42.722229  757567 addons.go:69] Setting default-storageclass=true in profile "no-preload-970287"
	I1014 21:20:42.722263  757567 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-970287"
	I1014 21:20:42.722614  757567 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:20:42.722858  757567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:42.763222  757567 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:20:42.766152  757567 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:20:42.766184  757567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 21:20:42.766248  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:42.787847  757567 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1014 21:20:42.791450  757567 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1014 21:20:40.329199  758490 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-810782:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (5.247540627s)
	I1014 21:20:40.329230  758490 kic.go:203] duration metric: took 5.247690136s to extract preloaded images to volume ...
	W1014 21:20:40.329367  758490 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1014 21:20:40.329478  758490 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 21:20:40.417230  758490 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-810782 --name auto-810782 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-810782 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-810782 --network auto-810782 --ip 192.168.85.2 --volume auto-810782:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 21:20:40.790346  758490 cli_runner.go:164] Run: docker container inspect auto-810782 --format={{.State.Running}}
	I1014 21:20:40.815372  758490 cli_runner.go:164] Run: docker container inspect auto-810782 --format={{.State.Status}}
	I1014 21:20:40.843061  758490 cli_runner.go:164] Run: docker exec auto-810782 stat /var/lib/dpkg/alternatives/iptables
	I1014 21:20:40.911957  758490 oci.go:144] the created container "auto-810782" has a running status.
	I1014 21:20:40.911977  758490 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa...
	I1014 21:20:41.325483  758490 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 21:20:41.350301  758490 cli_runner.go:164] Run: docker container inspect auto-810782 --format={{.State.Status}}
	I1014 21:20:41.368982  758490 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 21:20:41.369008  758490 kic_runner.go:114] Args: [docker exec --privileged auto-810782 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 21:20:41.447713  758490 cli_runner.go:164] Run: docker container inspect auto-810782 --format={{.State.Status}}
	I1014 21:20:41.477565  758490 machine.go:93] provisionDockerMachine start ...
	I1014 21:20:41.477659  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:20:41.506039  758490 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:41.506391  758490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1014 21:20:41.506408  758490 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 21:20:41.507084  758490 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60380->127.0.0.1:33832: read: connection reset by peer
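That handshake failure is transient: the container has only just started and sshd is most likely not accepting connections yet, so libmachine retries and succeeds a few seconds later (see the successful hostname command further down). Once it is up, the same session can be opened manually with the key and forwarded port shown in the log:

	# Manual SSH into the auto-810782 node using the key and host port libmachine reports.
	ssh -o StrictHostKeyChecking=no -p 33832 \
	  -i /home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa \
	  docker@127.0.0.1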
	I1014 21:20:42.796936  757567 addons.go:238] Setting addon default-storageclass=true in "no-preload-970287"
	W1014 21:20:42.796957  757567 addons.go:247] addon default-storageclass should already be in state true
	I1014 21:20:42.796982  757567 host.go:66] Checking if "no-preload-970287" exists ...
	I1014 21:20:42.797451  757567 cli_runner.go:164] Run: docker container inspect no-preload-970287 --format={{.State.Status}}
	I1014 21:20:42.797786  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1014 21:20:42.797804  757567 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1014 21:20:42.797849  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:42.802100  757567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:20:42.848433  757567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:20:42.848461  757567 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 21:20:42.848482  757567 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 21:20:42.848545  757567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970287
	I1014 21:20:42.876376  757567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/no-preload-970287/id_rsa Username:docker}
	I1014 21:20:43.039464  757567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:20:43.078191  757567 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:20:43.085791  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1014 21:20:43.085859  757567 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1014 21:20:43.103257  757567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 21:20:43.159340  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1014 21:20:43.159366  757567 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1014 21:20:43.237884  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1014 21:20:43.237912  757567 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1014 21:20:43.334200  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1014 21:20:43.334227  757567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1014 21:20:43.352094  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1014 21:20:43.352121  757567 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1014 21:20:43.373705  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1014 21:20:43.373737  757567 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1014 21:20:43.392529  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1014 21:20:43.392571  757567 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1014 21:20:43.413513  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1014 21:20:43.413539  757567 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1014 21:20:43.430765  757567 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 21:20:43.430794  757567 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1014 21:20:43.453827  757567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
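Those manifests create the dashboard Deployment, Service, RBAC objects, and secret in the kubernetes-dashboard namespace (the standard layout of the minikube dashboard addon). After the apply, their state can be checked from the host with something along these lines, assuming the kubeconfig context created for this profile:

	# Dashboard addon objects land in the kubernetes-dashboard namespace.
	kubectl --context no-preload-970287 -n kubernetes-dashboard get deployments,services,pods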
	I1014 21:20:44.700184  758490 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-810782
	
	I1014 21:20:44.700205  758490 ubuntu.go:182] provisioning hostname "auto-810782"
	I1014 21:20:44.700284  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:20:44.750482  758490 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:44.750788  758490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1014 21:20:44.750799  758490 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-810782 && echo "auto-810782" | sudo tee /etc/hostname
	I1014 21:20:44.945884  758490 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-810782
	
	I1014 21:20:44.945969  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:20:44.988681  758490 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:44.988999  758490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1014 21:20:44.989015  758490 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-810782' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-810782/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-810782' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 21:20:45.242201  758490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 21:20:45.242309  758490 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-545047/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-545047/.minikube}
	I1014 21:20:45.242344  758490 ubuntu.go:190] setting up certificates
	I1014 21:20:45.242384  758490 provision.go:84] configureAuth start
	I1014 21:20:45.242665  758490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-810782
	I1014 21:20:45.286182  758490 provision.go:143] copyHostCerts
	I1014 21:20:45.286259  758490 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem, removing ...
	I1014 21:20:45.286270  758490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem
	I1014 21:20:45.286373  758490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/ca.pem (1078 bytes)
	I1014 21:20:45.286535  758490 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem, removing ...
	I1014 21:20:45.286546  758490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem
	I1014 21:20:45.286581  758490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/cert.pem (1123 bytes)
	I1014 21:20:45.286655  758490 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem, removing ...
	I1014 21:20:45.286661  758490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem
	I1014 21:20:45.286686  758490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-545047/.minikube/key.pem (1675 bytes)
	I1014 21:20:45.286812  758490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem org=jenkins.auto-810782 san=[127.0.0.1 192.168.85.2 auto-810782 localhost minikube]
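The server certificate generated here carries the SAN list shown (127.0.0.1, 192.168.85.2, auto-810782, localhost, minikube). Once it has been written, those names can be confirmed from the host, for instance:

	# List the Subject Alternative Names embedded in the freshly generated server cert.
	openssl x509 -noout -text -in /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'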
	I1014 21:20:45.477961  758490 provision.go:177] copyRemoteCerts
	I1014 21:20:45.478060  758490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 21:20:45.478125  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:20:45.497213  758490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa Username:docker}
	I1014 21:20:45.617262  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 21:20:45.658796  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1014 21:20:45.686167  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 21:20:45.720595  758490 provision.go:87] duration metric: took 478.160894ms to configureAuth
	I1014 21:20:45.720622  758490 ubuntu.go:206] setting minikube options for container-runtime
	I1014 21:20:45.720850  758490 config.go:182] Loaded profile config "auto-810782": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:20:45.721020  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:20:45.762407  758490 main.go:141] libmachine: Using SSH client type: native
	I1014 21:20:45.762744  758490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I1014 21:20:45.762766  758490 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 21:20:46.137032  758490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 21:20:46.137069  758490 machine.go:96] duration metric: took 4.659478967s to provisionDockerMachine
	I1014 21:20:46.137078  758490 client.go:171] duration metric: took 11.696651136s to LocalClient.Create
	I1014 21:20:46.137092  758490 start.go:167] duration metric: took 11.696727741s to libmachine.API.Create "auto-810782"
	I1014 21:20:46.137126  758490 start.go:293] postStartSetup for "auto-810782" (driver="docker")
	I1014 21:20:46.137144  758490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 21:20:46.137247  758490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 21:20:46.137327  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:20:46.185646  758490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa Username:docker}
	I1014 21:20:46.301126  758490 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 21:20:46.306832  758490 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 21:20:46.306867  758490 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 21:20:46.306879  758490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/addons for local assets ...
	I1014 21:20:46.306934  758490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-545047/.minikube/files for local assets ...
	I1014 21:20:46.307017  758490 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem -> 5469102.pem in /etc/ssl/certs
	I1014 21:20:46.307134  758490 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 21:20:46.320569  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:20:46.349015  758490 start.go:296] duration metric: took 211.85106ms for postStartSetup
	I1014 21:20:46.349383  758490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-810782
	I1014 21:20:46.377904  758490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/config.json ...
	I1014 21:20:46.378192  758490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 21:20:46.378244  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:20:46.409945  758490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa Username:docker}
	I1014 21:20:46.521574  758490 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
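The two df probes above sample how full /var is inside the new container: the first reads the used percentage, the second the remaining space in whole gibibytes. Run directly on the node they look like:

	# Used percentage of /var, then free space in GiB, as minikube samples them.
	df -h /var | awk 'NR==2{print $5}'
	df -BG /var | awk 'NR==2{print $4}'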
	I1014 21:20:46.528941  758490 start.go:128] duration metric: took 12.09209582s to createHost
	I1014 21:20:46.529008  758490 start.go:83] releasing machines lock for "auto-810782", held for 12.092265547s
	I1014 21:20:46.529124  758490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-810782
	I1014 21:20:46.559968  758490 ssh_runner.go:195] Run: cat /version.json
	I1014 21:20:46.560024  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:20:46.560261  758490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 21:20:46.560351  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:20:46.594501  758490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa Username:docker}
	I1014 21:20:46.610168  758490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa Username:docker}
	I1014 21:20:46.713426  758490 ssh_runner.go:195] Run: systemctl --version
	I1014 21:20:46.833374  758490 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 21:20:46.907285  758490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 21:20:46.912478  758490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 21:20:46.912550  758490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 21:20:46.969841  758490 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1014 21:20:46.969868  758490 start.go:495] detecting cgroup driver to use...
	I1014 21:20:46.969903  758490 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 21:20:46.969958  758490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 21:20:46.995925  758490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 21:20:47.015403  758490 docker.go:218] disabling cri-docker service (if available) ...
	I1014 21:20:47.015481  758490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 21:20:47.034817  758490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 21:20:47.054874  758490 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 21:20:47.277340  758490 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 21:20:47.454151  758490 docker.go:234] disabling docker service ...
	I1014 21:20:47.454270  758490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 21:20:47.486089  758490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 21:20:47.505165  758490 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 21:20:47.731741  758490 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 21:20:47.941374  758490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 21:20:47.956249  758490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 21:20:47.980987  758490 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 21:20:47.981103  758490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:47.997695  758490 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 21:20:47.997812  758490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:48.011147  758490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:48.027140  758490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:48.042206  758490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 21:20:48.058658  758490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:48.075265  758490 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:48.097781  758490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 21:20:48.107989  758490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 21:20:48.123519  758490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 21:20:48.133626  758490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:48.320790  758490 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 21:20:48.519901  758490 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 21:20:48.520011  758490 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 21:20:48.528071  758490 start.go:563] Will wait 60s for crictl version
	I1014 21:20:48.528184  758490 ssh_runner.go:195] Run: which crictl
	I1014 21:20:48.535424  758490 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 21:20:48.594222  758490 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 21:20:48.594329  758490 ssh_runner.go:195] Run: crio --version
	I1014 21:20:48.663307  758490 ssh_runner.go:195] Run: crio --version
	I1014 21:20:48.712343  758490 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 21:20:48.715288  758490 cli_runner.go:164] Run: docker network inspect auto-810782 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 21:20:48.738546  758490 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1014 21:20:48.742594  758490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:20:48.763017  758490 kubeadm.go:883] updating cluster {Name:auto-810782 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-810782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 21:20:48.763136  758490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 21:20:48.763188  758490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:20:48.822344  758490 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:20:48.822372  758490 crio.go:433] Images already preloaded, skipping extraction
	I1014 21:20:48.822428  758490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 21:20:48.865307  758490 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 21:20:48.865335  758490 cache_images.go:85] Images are preloaded, skipping loading
	I1014 21:20:48.865343  758490 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1014 21:20:48.865433  758490 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-810782 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-810782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 21:20:48.865526  758490 ssh_runner.go:195] Run: crio config
	I1014 21:20:48.957944  758490 cni.go:84] Creating CNI manager for ""
	I1014 21:20:48.957971  758490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:20:48.957991  758490 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 21:20:48.958014  758490 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-810782 NodeName:auto-810782 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 21:20:48.958145  758490 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-810782"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 21:20:48.958223  758490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 21:20:48.973608  758490 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 21:20:48.973695  758490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 21:20:48.983922  758490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1014 21:20:49.004628  758490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 21:20:49.032285  758490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1014 21:20:49.050506  758490 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1014 21:20:49.056870  758490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 21:20:49.072747  758490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:20:51.819048  757567 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.77955122s)
	I1014 21:20:51.819103  757567 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.74089327s)
	I1014 21:20:51.819133  757567 node_ready.go:35] waiting up to 6m0s for node "no-preload-970287" to be "Ready" ...
	I1014 21:20:51.819452  757567 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.716125907s)
	I1014 21:20:51.871888  757567 node_ready.go:49] node "no-preload-970287" is "Ready"
	I1014 21:20:51.871911  757567 node_ready.go:38] duration metric: took 52.759451ms for node "no-preload-970287" to be "Ready" ...
	I1014 21:20:51.871924  757567 api_server.go:52] waiting for apiserver process to appear ...
	I1014 21:20:51.871976  757567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 21:20:52.003540  757567 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.549666044s)
	I1014 21:20:52.003894  757567 api_server.go:72] duration metric: took 9.29447948s to wait for apiserver process to appear ...
	I1014 21:20:52.003945  757567 api_server.go:88] waiting for apiserver healthz status ...
	I1014 21:20:52.003983  757567 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1014 21:20:52.007341  757567 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-970287 addons enable metrics-server
	
	I1014 21:20:52.010348  757567 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1014 21:20:49.254523  758490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:20:49.272147  758490 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782 for IP: 192.168.85.2
	I1014 21:20:49.272173  758490 certs.go:195] generating shared ca certs ...
	I1014 21:20:49.272189  758490 certs.go:227] acquiring lock for ca certs: {Name:mk7160a373bcea9d28f788a254495a083b4d23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:49.272357  758490 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key
	I1014 21:20:49.272402  758490 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key
	I1014 21:20:49.272415  758490 certs.go:257] generating profile certs ...
	I1014 21:20:49.272489  758490 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.key
	I1014 21:20:49.272505  758490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.crt with IP's: []
	I1014 21:20:49.472584  758490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.crt ...
	I1014 21:20:49.472616  758490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.crt: {Name:mkbc239d2a8ad48649f591f416c75ebc1b4dcbb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:49.472801  758490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.key ...
	I1014 21:20:49.472816  758490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.key: {Name:mkbe66273e5658d249652fdf3003b5dbf104a156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:49.472903  758490 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.key.c4fc698d
	I1014 21:20:49.472924  758490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.crt.c4fc698d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1014 21:20:50.114544  758490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.crt.c4fc698d ...
	I1014 21:20:50.114580  758490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.crt.c4fc698d: {Name:mka7917892c64e37d3102e6d528343aab6538fc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:50.114783  758490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.key.c4fc698d ...
	I1014 21:20:50.114802  758490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.key.c4fc698d: {Name:mk2e3400af979d440f9d862b57cc7e5c38cbc0f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:50.114894  758490 certs.go:382] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.crt.c4fc698d -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.crt
	I1014 21:20:50.114981  758490 certs.go:386] copying /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.key.c4fc698d -> /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.key
	I1014 21:20:50.115045  758490 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/proxy-client.key
	I1014 21:20:50.115063  758490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/proxy-client.crt with IP's: []
	I1014 21:20:51.209286  758490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/proxy-client.crt ...
	I1014 21:20:51.209320  758490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/proxy-client.crt: {Name:mk8b4f82003bf5e61833421ff91e9236fac30774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:51.209527  758490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/proxy-client.key ...
	I1014 21:20:51.209540  758490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/proxy-client.key: {Name:mke0e0ab5d8a9d7aed25e411cccbee7a0f0211e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:20:51.209747  758490 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem (1338 bytes)
	W1014 21:20:51.209801  758490 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910_empty.pem, impossibly tiny 0 bytes
	I1014 21:20:51.209814  758490 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 21:20:51.209843  758490 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/ca.pem (1078 bytes)
	I1014 21:20:51.209886  758490 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/cert.pem (1123 bytes)
	I1014 21:20:51.209911  758490 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/certs/key.pem (1675 bytes)
	I1014 21:20:51.209960  758490 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem (1708 bytes)
	I1014 21:20:51.210604  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 21:20:51.228940  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 21:20:51.246625  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 21:20:51.273420  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 21:20:51.301913  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1014 21:20:51.337446  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 21:20:51.368822  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 21:20:51.401591  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 21:20:51.438202  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/certs/546910.pem --> /usr/share/ca-certificates/546910.pem (1338 bytes)
	I1014 21:20:51.461489  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/ssl/certs/5469102.pem --> /usr/share/ca-certificates/5469102.pem (1708 bytes)
	I1014 21:20:51.493385  758490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-545047/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 21:20:51.520860  758490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 21:20:51.542019  758490 ssh_runner.go:195] Run: openssl version
	I1014 21:20:51.552286  758490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/546910.pem && ln -fs /usr/share/ca-certificates/546910.pem /etc/ssl/certs/546910.pem"
	I1014 21:20:51.565070  758490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/546910.pem
	I1014 21:20:51.574770  758490 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 20:05 /usr/share/ca-certificates/546910.pem
	I1014 21:20:51.574854  758490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/546910.pem
	I1014 21:20:51.619166  758490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/546910.pem /etc/ssl/certs/51391683.0"
	I1014 21:20:51.628035  758490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469102.pem && ln -fs /usr/share/ca-certificates/5469102.pem /etc/ssl/certs/5469102.pem"
	I1014 21:20:51.636623  758490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5469102.pem
	I1014 21:20:51.640987  758490 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 20:05 /usr/share/ca-certificates/5469102.pem
	I1014 21:20:51.641064  758490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469102.pem
	I1014 21:20:51.683674  758490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5469102.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 21:20:51.697280  758490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 21:20:51.706180  758490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:51.713027  758490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:51.713112  758490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 21:20:51.772120  758490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 21:20:51.785476  758490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 21:20:51.793662  758490 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 21:20:51.793757  758490 kubeadm.go:400] StartCluster: {Name:auto-810782 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-810782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 21:20:51.793861  758490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 21:20:51.793968  758490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 21:20:51.834197  758490 cri.go:89] found id: ""
	I1014 21:20:51.834284  758490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 21:20:51.847308  758490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 21:20:51.855609  758490 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 21:20:51.855681  758490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 21:20:51.870905  758490 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 21:20:51.870927  758490 kubeadm.go:157] found existing configuration files:
	
	I1014 21:20:51.870991  758490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 21:20:51.888450  758490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 21:20:51.888566  758490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 21:20:51.897983  758490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 21:20:51.907896  758490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 21:20:51.907972  758490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 21:20:51.921484  758490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 21:20:51.934128  758490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 21:20:51.934198  758490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 21:20:51.942755  758490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 21:20:51.957002  758490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 21:20:51.957078  758490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 21:20:51.973628  758490 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 21:20:52.059646  758490 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 21:20:52.060017  758490 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 21:20:52.129358  758490 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 21:20:52.131185  758490 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1014 21:20:52.131249  758490 kubeadm.go:318] OS: Linux
	I1014 21:20:52.131309  758490 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 21:20:52.131379  758490 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1014 21:20:52.131448  758490 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 21:20:52.131518  758490 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 21:20:52.131583  758490 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 21:20:52.131647  758490 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 21:20:52.131710  758490 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 21:20:52.131773  758490 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 21:20:52.131836  758490 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1014 21:20:52.260885  758490 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 21:20:52.261011  758490 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 21:20:52.261134  758490 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 21:20:52.276425  758490 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 21:20:52.281674  758490 out.go:252]   - Generating certificates and keys ...
	I1014 21:20:52.281804  758490 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 21:20:52.281886  758490 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 21:20:52.596382  758490 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 21:20:53.209049  758490 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 21:20:53.443833  758490 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 21:20:53.848719  758490 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 21:20:54.147119  758490 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 21:20:54.147479  758490 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-810782 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1014 21:20:52.013252  757567 addons.go:514] duration metric: took 9.303579028s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1014 21:20:52.029120  757567 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1014 21:20:52.033462  757567 api_server.go:141] control plane version: v1.34.1
	I1014 21:20:52.033489  757567 api_server.go:131] duration metric: took 29.525217ms to wait for apiserver health ...
	I1014 21:20:52.033498  757567 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 21:20:52.040157  757567 system_pods.go:59] 8 kube-system pods found
	I1014 21:20:52.040246  757567 system_pods.go:61] "coredns-66bc5c9577-m627n" [70a4ed77-8432-4ad1-8ea5-28e6bb93b469] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 21:20:52.040272  757567 system_pods.go:61] "etcd-no-preload-970287" [47522eab-d9ff-406f-abbf-ea1f58b462e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 21:20:52.040364  757567 system_pods.go:61] "kindnet-tps4w" [bbcd8c52-d245-492d-b083-0b93dc854ffd] Running
	I1014 21:20:52.040404  757567 system_pods.go:61] "kube-apiserver-no-preload-970287" [34789a40-0f03-4732-bbab-ec4774cee5ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 21:20:52.040426  757567 system_pods.go:61] "kube-controller-manager-no-preload-970287" [84201c4c-7ac7-4cb5-a2d3-cdb4d100ddf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 21:20:52.040454  757567 system_pods.go:61] "kube-proxy-c4vlq" [bd169342-2896-4b4e-b062-de3894754634] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 21:20:52.040490  757567 system_pods.go:61] "kube-scheduler-no-preload-970287" [2c141840-1640-41f3-8d2b-4fb8c4b15f07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 21:20:52.040515  757567 system_pods.go:61] "storage-provisioner" [b9d1e209-23f4-4c75-b216-869ed982bec3] Running
	I1014 21:20:52.040535  757567 system_pods.go:74] duration metric: took 7.031051ms to wait for pod list to return data ...
	I1014 21:20:52.040572  757567 default_sa.go:34] waiting for default service account to be created ...
	I1014 21:20:52.052642  757567 default_sa.go:45] found service account: "default"
	I1014 21:20:52.052718  757567 default_sa.go:55] duration metric: took 12.123601ms for default service account to be created ...
	I1014 21:20:52.052742  757567 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 21:20:52.154584  757567 system_pods.go:86] 8 kube-system pods found
	I1014 21:20:52.154670  757567 system_pods.go:89] "coredns-66bc5c9577-m627n" [70a4ed77-8432-4ad1-8ea5-28e6bb93b469] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 21:20:52.154695  757567 system_pods.go:89] "etcd-no-preload-970287" [47522eab-d9ff-406f-abbf-ea1f58b462e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 21:20:52.154734  757567 system_pods.go:89] "kindnet-tps4w" [bbcd8c52-d245-492d-b083-0b93dc854ffd] Running
	I1014 21:20:52.154762  757567 system_pods.go:89] "kube-apiserver-no-preload-970287" [34789a40-0f03-4732-bbab-ec4774cee5ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 21:20:52.154787  757567 system_pods.go:89] "kube-controller-manager-no-preload-970287" [84201c4c-7ac7-4cb5-a2d3-cdb4d100ddf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 21:20:52.154822  757567 system_pods.go:89] "kube-proxy-c4vlq" [bd169342-2896-4b4e-b062-de3894754634] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 21:20:52.154849  757567 system_pods.go:89] "kube-scheduler-no-preload-970287" [2c141840-1640-41f3-8d2b-4fb8c4b15f07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 21:20:52.154870  757567 system_pods.go:89] "storage-provisioner" [b9d1e209-23f4-4c75-b216-869ed982bec3] Running
	I1014 21:20:52.154908  757567 system_pods.go:126] duration metric: took 102.143556ms to wait for k8s-apps to be running ...
	I1014 21:20:52.154935  757567 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 21:20:52.155027  757567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 21:20:52.191138  757567 system_svc.go:56] duration metric: took 36.193175ms WaitForService to wait for kubelet
	I1014 21:20:52.191220  757567 kubeadm.go:586] duration metric: took 9.481801277s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 21:20:52.191257  757567 node_conditions.go:102] verifying NodePressure condition ...
	I1014 21:20:52.217276  757567 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 21:20:52.217369  757567 node_conditions.go:123] node cpu capacity is 2
	I1014 21:20:52.217398  757567 node_conditions.go:105] duration metric: took 26.106071ms to run NodePressure ...
	I1014 21:20:52.217438  757567 start.go:241] waiting for startup goroutines ...
	I1014 21:20:52.217463  757567 start.go:246] waiting for cluster config update ...
	I1014 21:20:52.217490  757567 start.go:255] writing updated cluster config ...
	I1014 21:20:52.217852  757567 ssh_runner.go:195] Run: rm -f paused
	I1014 21:20:52.226355  757567 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 21:20:52.260978  757567 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m627n" in "kube-system" namespace to be "Ready" or be gone ...
	W1014 21:20:54.280496  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	I1014 21:20:55.185261  758490 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 21:20:55.185672  758490 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-810782 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1014 21:20:55.446306  758490 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 21:20:55.732688  758490 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 21:20:55.973481  758490 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 21:20:55.973800  758490 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 21:20:56.547457  758490 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 21:20:56.803057  758490 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 21:20:57.413692  758490 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 21:20:57.494152  758490 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 21:20:57.818094  758490 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 21:20:57.819195  758490 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 21:20:57.822164  758490 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 21:20:57.826439  758490 out.go:252]   - Booting up control plane ...
	I1014 21:20:57.826543  758490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 21:20:57.826631  758490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 21:20:57.827835  758490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 21:20:57.850799  758490 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 21:20:57.850918  758490 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 21:20:57.860526  758490 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 21:20:57.860637  758490 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 21:20:57.860684  758490 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 21:20:58.080298  758490 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 21:20:58.085550  758490 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 21:20:59.086936  758490 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001697105s
	I1014 21:20:59.097036  758490 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 21:20:59.100785  758490 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1014 21:20:59.100894  758490 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 21:20:59.100976  758490 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1014 21:20:56.767804  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:20:59.267478  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:01.275706  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	I1014 21:21:02.258447  758490 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.160022341s
	W1014 21:21:03.771026  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:05.793921  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	I1014 21:21:06.549884  758490 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.452179513s
	I1014 21:21:07.600788  758490 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.502881262s
	I1014 21:21:07.661586  758490 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 21:21:07.702526  758490 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 21:21:07.752185  758490 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 21:21:07.753040  758490 kubeadm.go:318] [mark-control-plane] Marking the node auto-810782 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 21:21:07.782774  758490 kubeadm.go:318] [bootstrap-token] Using token: owkwev.s128a74ctr4eiv72
	I1014 21:21:07.786453  758490 out.go:252]   - Configuring RBAC rules ...
	I1014 21:21:07.786576  758490 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 21:21:07.795523  758490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 21:21:07.806593  758490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 21:21:07.813316  758490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 21:21:07.827682  758490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 21:21:07.838258  758490 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 21:21:08.030107  758490 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 21:21:08.459835  758490 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1014 21:21:09.019343  758490 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1014 21:21:09.021563  758490 kubeadm.go:318] 
	I1014 21:21:09.021645  758490 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1014 21:21:09.021653  758490 kubeadm.go:318] 
	I1014 21:21:09.021734  758490 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1014 21:21:09.021742  758490 kubeadm.go:318] 
	I1014 21:21:09.021769  758490 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1014 21:21:09.021843  758490 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 21:21:09.021901  758490 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 21:21:09.021911  758490 kubeadm.go:318] 
	I1014 21:21:09.021974  758490 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1014 21:21:09.021982  758490 kubeadm.go:318] 
	I1014 21:21:09.022043  758490 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 21:21:09.022053  758490 kubeadm.go:318] 
	I1014 21:21:09.022112  758490 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1014 21:21:09.022200  758490 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 21:21:09.022277  758490 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 21:21:09.022286  758490 kubeadm.go:318] 
	I1014 21:21:09.022374  758490 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 21:21:09.022458  758490 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1014 21:21:09.022465  758490 kubeadm.go:318] 
	I1014 21:21:09.022552  758490 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token owkwev.s128a74ctr4eiv72 \
	I1014 21:21:09.022666  758490 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a \
	I1014 21:21:09.022691  758490 kubeadm.go:318] 	--control-plane 
	I1014 21:21:09.022700  758490 kubeadm.go:318] 
	I1014 21:21:09.022797  758490 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1014 21:21:09.022806  758490 kubeadm.go:318] 
	I1014 21:21:09.022891  758490 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token owkwev.s128a74ctr4eiv72 \
	I1014 21:21:09.023003  758490 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3806753ba46bd941cec8b712d1756473c13fe6f674730f3d4d33a9443ed7fd6a 
	I1014 21:21:09.027249  758490 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1014 21:21:09.027493  758490 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1014 21:21:09.027610  758490 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 21:21:09.027633  758490 cni.go:84] Creating CNI manager for ""
	I1014 21:21:09.027641  758490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 21:21:09.030747  758490 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1014 21:21:09.033564  758490 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 21:21:09.040717  758490 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1014 21:21:09.040743  758490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 21:21:09.063489  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	W1014 21:21:08.272375  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:10.273533  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	I1014 21:21:09.791321  758490 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 21:21:09.791454  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:21:09.791530  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-810782 minikube.k8s.io/updated_at=2025_10_14T21_21_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662 minikube.k8s.io/name=auto-810782 minikube.k8s.io/primary=true
	I1014 21:21:10.020226  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:21:10.020295  758490 ops.go:34] apiserver oom_adj: -16
	I1014 21:21:10.520878  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:21:11.020466  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:21:11.520714  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:21:12.020834  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:21:12.520332  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:21:13.021127  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:21:13.521113  758490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 21:21:13.755243  758490 kubeadm.go:1113] duration metric: took 3.963834493s to wait for elevateKubeSystemPrivileges
	I1014 21:21:13.755270  758490 kubeadm.go:402] duration metric: took 21.961518228s to StartCluster
	I1014 21:21:13.755291  758490 settings.go:142] acquiring lock: {Name:mk1b080406bdcda2250df1620a9e10e091a80c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:21:13.755351  758490 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:21:13.756326  758490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-545047/kubeconfig: {Name:mk6f7796a7ef23453f310c313bc0a4847802f9d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 21:21:13.757147  758490 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 21:21:13.757280  758490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 21:21:13.757517  758490 config.go:182] Loaded profile config "auto-810782": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:21:13.757548  758490 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 21:21:13.757642  758490 addons.go:69] Setting storage-provisioner=true in profile "auto-810782"
	I1014 21:21:13.757656  758490 addons.go:238] Setting addon storage-provisioner=true in "auto-810782"
	I1014 21:21:13.757664  758490 addons.go:69] Setting default-storageclass=true in profile "auto-810782"
	I1014 21:21:13.757681  758490 host.go:66] Checking if "auto-810782" exists ...
	I1014 21:21:13.757681  758490 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-810782"
	I1014 21:21:13.758110  758490 cli_runner.go:164] Run: docker container inspect auto-810782 --format={{.State.Status}}
	I1014 21:21:13.758163  758490 cli_runner.go:164] Run: docker container inspect auto-810782 --format={{.State.Status}}
	I1014 21:21:13.768592  758490 out.go:179] * Verifying Kubernetes components...
	I1014 21:21:13.774711  758490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 21:21:13.817239  758490 addons.go:238] Setting addon default-storageclass=true in "auto-810782"
	I1014 21:21:13.817280  758490 host.go:66] Checking if "auto-810782" exists ...
	I1014 21:21:13.817701  758490 cli_runner.go:164] Run: docker container inspect auto-810782 --format={{.State.Status}}
	I1014 21:21:13.829669  758490 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 21:21:13.837882  758490 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:21:13.838061  758490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 21:21:13.838148  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:21:13.864932  758490 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 21:21:13.864952  758490 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 21:21:13.865017  758490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810782
	I1014 21:21:13.903366  758490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa Username:docker}
	I1014 21:21:13.910342  758490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/auto-810782/id_rsa Username:docker}
	I1014 21:21:14.197547  758490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 21:21:14.206316  758490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 21:21:14.255642  758490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 21:21:14.264006  758490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 21:21:14.980189  758490 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1014 21:21:14.982362  758490 node_ready.go:35] waiting up to 15m0s for node "auto-810782" to be "Ready" ...
	I1014 21:21:15.041364  758490 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1014 21:21:12.766642  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:14.766945  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	I1014 21:21:15.045128  758490 addons.go:514] duration metric: took 1.287547684s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 21:21:15.484357  758490 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-810782" context rescaled to 1 replicas
	W1014 21:21:16.985424  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:16.767139  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:19.266772  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:21.267814  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:19.485103  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:21.485707  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:23.986129  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:23.766734  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:26.266643  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:26.485725  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:28.486133  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:28.766884  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	W1014 21:21:30.767357  757567 pod_ready.go:104] pod "coredns-66bc5c9577-m627n" is not "Ready", error: <nil>
	I1014 21:21:31.267811  757567 pod_ready.go:94] pod "coredns-66bc5c9577-m627n" is "Ready"
	I1014 21:21:31.267842  757567 pod_ready.go:86] duration metric: took 39.006827062s for pod "coredns-66bc5c9577-m627n" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:31.270713  757567 pod_ready.go:83] waiting for pod "etcd-no-preload-970287" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:31.275318  757567 pod_ready.go:94] pod "etcd-no-preload-970287" is "Ready"
	I1014 21:21:31.275346  757567 pod_ready.go:86] duration metric: took 4.604629ms for pod "etcd-no-preload-970287" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:31.277773  757567 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-970287" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:31.282393  757567 pod_ready.go:94] pod "kube-apiserver-no-preload-970287" is "Ready"
	I1014 21:21:31.282421  757567 pod_ready.go:86] duration metric: took 4.621319ms for pod "kube-apiserver-no-preload-970287" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:31.284942  757567 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-970287" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:31.465354  757567 pod_ready.go:94] pod "kube-controller-manager-no-preload-970287" is "Ready"
	I1014 21:21:31.465385  757567 pod_ready.go:86] duration metric: took 180.41666ms for pod "kube-controller-manager-no-preload-970287" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:31.666355  757567 pod_ready.go:83] waiting for pod "kube-proxy-c4vlq" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:32.065701  757567 pod_ready.go:94] pod "kube-proxy-c4vlq" is "Ready"
	I1014 21:21:32.065780  757567 pod_ready.go:86] duration metric: took 399.393414ms for pod "kube-proxy-c4vlq" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:32.264966  757567 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-970287" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:32.671562  757567 pod_ready.go:94] pod "kube-scheduler-no-preload-970287" is "Ready"
	I1014 21:21:32.671641  757567 pod_ready.go:86] duration metric: took 406.648235ms for pod "kube-scheduler-no-preload-970287" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 21:21:32.671669  757567 pod_ready.go:40] duration metric: took 40.445226196s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 21:21:32.740100  757567 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1014 21:21:32.743641  757567 out.go:179] * Done! kubectl is now configured to use "no-preload-970287" cluster and "default" namespace by default
	W1014 21:21:30.985705  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:32.985895  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:35.485395  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:37.485908  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:39.985668  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	W1014 21:21:42.485015  758490 node_ready.go:57] node "auto-810782" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 14 21:21:25 no-preload-970287 crio[653]: time="2025-10-14T21:21:25.782253436Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:21:25 no-preload-970287 crio[653]: time="2025-10-14T21:21:25.78942861Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:21:25 no-preload-970287 crio[653]: time="2025-10-14T21:21:25.79011847Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 21:21:25 no-preload-970287 crio[653]: time="2025-10-14T21:21:25.804633607Z" level=info msg="Created container 1c0bac6eb96e4833497570b1054b50fb2a2cdbc485bc3e4f08eb66f614245d09: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4/dashboard-metrics-scraper" id=4dffee9d-cf07-4a1d-be89-87e907191f9b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 21:21:25 no-preload-970287 crio[653]: time="2025-10-14T21:21:25.805300115Z" level=info msg="Starting container: 1c0bac6eb96e4833497570b1054b50fb2a2cdbc485bc3e4f08eb66f614245d09" id=c75934d4-3fba-471b-a036-410d5fe0430a name=/runtime.v1.RuntimeService/StartContainer
	Oct 14 21:21:25 no-preload-970287 conmon[1640]: conmon 1c0bac6eb96e48334975 <ninfo>: container 1642 exited with status 1
	Oct 14 21:21:25 no-preload-970287 crio[653]: time="2025-10-14T21:21:25.810048443Z" level=info msg="Started container" PID=1642 containerID=1c0bac6eb96e4833497570b1054b50fb2a2cdbc485bc3e4f08eb66f614245d09 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4/dashboard-metrics-scraper id=c75934d4-3fba-471b-a036-410d5fe0430a name=/runtime.v1.RuntimeService/StartContainer sandboxID=edfc02a18831be8585747bf4198b47582a110453ecb5f56563461ccc9da56b68
	Oct 14 21:21:26 no-preload-970287 crio[653]: time="2025-10-14T21:21:26.306036229Z" level=info msg="Removing container: cae9e5ee03bef7f43cd086d714ccad801abb6ba4e1f5d3dba576280eb63afe39" id=970ddb05-432b-4de3-ba5c-25c035b35896 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 21:21:26 no-preload-970287 crio[653]: time="2025-10-14T21:21:26.313880554Z" level=info msg="Error loading conmon cgroup of container cae9e5ee03bef7f43cd086d714ccad801abb6ba4e1f5d3dba576280eb63afe39: cgroup deleted" id=970ddb05-432b-4de3-ba5c-25c035b35896 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 21:21:26 no-preload-970287 crio[653]: time="2025-10-14T21:21:26.31956303Z" level=info msg="Removed container cae9e5ee03bef7f43cd086d714ccad801abb6ba4e1f5d3dba576280eb63afe39: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4/dashboard-metrics-scraper" id=970ddb05-432b-4de3-ba5c-25c035b35896 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.05745126Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.066118633Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.066169226Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.066209638Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.069775862Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.069837468Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.069861337Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.073408657Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.073451135Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.073475948Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.077029693Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.077068036Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.077094129Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.080893991Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 14 21:21:31 no-preload-970287 crio[653]: time="2025-10-14T21:21:31.080935502Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	1c0bac6eb96e4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   edfc02a18831b       dashboard-metrics-scraper-6ffb444bf9-5wgx4   kubernetes-dashboard
	4bcc6a658ca4f       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           27 seconds ago       Running             storage-provisioner         2                   9581dc6f994aa       storage-provisioner                          kube-system
	b7e2a8d9fee3d       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   948afd45059b3       kubernetes-dashboard-855c9754f9-f4mgk        kubernetes-dashboard
	86e64bf2703ce       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   6d2c1ff6b54af       busybox                                      default
	9d950fd859ff9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           59 seconds ago       Running             kube-proxy                  1                   83f06b15ec3c9       kube-proxy-c4vlq                             kube-system
	b026968d5f83c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           59 seconds ago       Running             coredns                     1                   4299616e037b7       coredns-66bc5c9577-m627n                     kube-system
	e9a7ae186819a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   2fc2e5f5545b8       kindnet-tps4w                                kube-system
	eab921e666565       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           59 seconds ago       Exited              storage-provisioner         1                   9581dc6f994aa       storage-provisioner                          kube-system
	cb346dcfd25d9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   528e66b06c719       kube-apiserver-no-preload-970287             kube-system
	882dabee1ff3b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   89c70792134b3       etcd-no-preload-970287                       kube-system
	27cb84754cd92       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   ec537d951e82d       kube-controller-manager-no-preload-970287    kube-system
	ecf2261fe6065       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   558dc46f76463       kube-scheduler-no-preload-970287             kube-system
	
	
	==> coredns [b026968d5f83c6d9b149d2641a2ce45e5705822b62c6b16cfc79a450a3e1b43b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40626 - 51701 "HINFO IN 5531993140901036532.4936131409708613596. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020113941s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               no-preload-970287
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-970287
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=no-preload-970287
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T21_19_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 21:19:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-970287
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 21:21:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 21:21:19 +0000   Tue, 14 Oct 2025 21:19:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 21:21:19 +0000   Tue, 14 Oct 2025 21:19:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 21:21:19 +0000   Tue, 14 Oct 2025 21:19:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 21:21:19 +0000   Tue, 14 Oct 2025 21:20:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-970287
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 786ed5ad164b4a7aae3c20ff7b233e05
	  System UUID:                739e9d19-3ca7-40c2-b50a-6b77f8bfdd3e
	  Boot ID:                    069136f0-bcc2-40ca-a457-d9b520acda1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-66bc5c9577-m627n                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m2s
	  kube-system                 etcd-no-preload-970287                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m7s
	  kube-system                 kindnet-tps4w                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m3s
	  kube-system                 kube-apiserver-no-preload-970287              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-controller-manager-no-preload-970287     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-proxy-c4vlq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-no-preload-970287              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-5wgx4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-f4mgk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m                     kube-proxy       
	  Normal   Starting                 57s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m19s (x8 over 2m19s)  kubelet          Node no-preload-970287 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m19s (x8 over 2m19s)  kubelet          Node no-preload-970287 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m19s (x8 over 2m19s)  kubelet          Node no-preload-970287 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m8s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m8s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m7s                   kubelet          Node no-preload-970287 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m7s                   kubelet          Node no-preload-970287 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m7s                   kubelet          Node no-preload-970287 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m4s                   node-controller  Node no-preload-970287 event: Registered Node no-preload-970287 in Controller
	  Normal   NodeReady                106s                   kubelet          Node no-preload-970287 status is now: NodeReady
	  Normal   Starting                 68s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)      kubelet          Node no-preload-970287 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)      kubelet          Node no-preload-970287 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x8 over 68s)      kubelet          Node no-preload-970287 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node no-preload-970287 event: Registered Node no-preload-970287 in Controller
	
	
	==> dmesg <==
	[Oct14 20:50] overlayfs: idmapped layers are currently not supported
	[ +25.424341] overlayfs: idmapped layers are currently not supported
	[Oct14 20:51] overlayfs: idmapped layers are currently not supported
	[ +25.439797] overlayfs: idmapped layers are currently not supported
	[Oct14 20:53] overlayfs: idmapped layers are currently not supported
	[Oct14 20:54] overlayfs: idmapped layers are currently not supported
	[Oct14 20:56] overlayfs: idmapped layers are currently not supported
	[Oct14 20:58] overlayfs: idmapped layers are currently not supported
	[Oct14 20:59] overlayfs: idmapped layers are currently not supported
	[Oct14 21:01] overlayfs: idmapped layers are currently not supported
	[Oct14 21:03] overlayfs: idmapped layers are currently not supported
	[Oct14 21:04] overlayfs: idmapped layers are currently not supported
	[Oct14 21:12] overlayfs: idmapped layers are currently not supported
	[Oct14 21:13] overlayfs: idmapped layers are currently not supported
	[ +32.288963] overlayfs: idmapped layers are currently not supported
	[Oct14 21:15] overlayfs: idmapped layers are currently not supported
	[Oct14 21:16] overlayfs: idmapped layers are currently not supported
	[ +21.775876] overlayfs: idmapped layers are currently not supported
	[Oct14 21:17] overlayfs: idmapped layers are currently not supported
	[Oct14 21:18] overlayfs: idmapped layers are currently not supported
	[Oct14 21:19] overlayfs: idmapped layers are currently not supported
	[ +43.590859] overlayfs: idmapped layers are currently not supported
	[Oct14 21:20] overlayfs: idmapped layers are currently not supported
	[ +26.799189] overlayfs: idmapped layers are currently not supported
	[ +17.106973] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [882dabee1ff3bf2239143b9f9994fbb5ca25e44aff582e09104e12db7b8feef7] <==
	{"level":"warn","ts":"2025-10-14T21:20:45.764130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:45.808656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:45.841251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:45.908739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:45.961119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:45.997416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.026779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.099191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.122843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.134486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.169534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.212645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.231283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.248456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.277718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.320500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.360599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.405121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.456903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.497056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.517302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.627259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.662203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.714039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-14T21:20:46.865058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45120","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:21:50 up  4:04,  0 user,  load average: 4.14, 3.62, 2.52
	Linux no-preload-970287 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e9a7ae186819ae864bbab79c2d91361c9ece7809280dd662a139a19b9dad3324] <==
	I1014 21:20:50.604607       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 21:20:50.604853       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1014 21:20:50.604963       1 main.go:148] setting mtu 1500 for CNI 
	I1014 21:20:50.604974       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 21:20:50.604986       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-14T21:20:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1014 21:20:51.053803       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1014 21:20:51.053878       1 controller.go:381] "Waiting for informer caches to sync"
	I1014 21:20:51.053913       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1014 21:20:51.054301       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1014 21:21:21.054428       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1014 21:21:21.054442       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1014 21:21:21.061229       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1014 21:21:21.062244       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1014 21:21:22.554488       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1014 21:21:22.554539       1 metrics.go:72] Registering metrics
	I1014 21:21:22.554599       1 controller.go:711] "Syncing nftables rules"
	I1014 21:21:31.056450       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1014 21:21:31.056507       1 main.go:301] handling current node
	I1014 21:21:41.058198       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1014 21:21:41.058231       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cb346dcfd25d91290bca1f5ab694ec0a28b9aaa55c5a269ab50381f0fbd4e400] <==
	I1014 21:20:48.624627       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 21:20:48.632292       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 21:20:48.659652       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 21:20:48.673236       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1014 21:20:48.673809       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 21:20:48.673821       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 21:20:48.673906       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 21:20:48.674092       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 21:20:48.684539       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1014 21:20:48.689718       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1014 21:20:48.697576       1 cache.go:39] Caches are synced for autoregister controller
	I1014 21:20:48.697712       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1014 21:20:48.697747       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1014 21:20:48.709726       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1014 21:20:48.790857       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 21:20:49.185164       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 21:20:51.067195       1 controller.go:667] quota admission added evaluator for: namespaces
	I1014 21:20:51.558658       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1014 21:20:51.697802       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 21:20:51.725202       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 21:20:51.966584       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.28.94"}
	I1014 21:20:51.995361       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.239.172"}
	I1014 21:20:53.790766       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 21:20:53.842512       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 21:20:54.141326       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [27cb84754cd9227030c4fe42c91f3891d3e08309d600c013e0ba42914cd29efc] <==
	I1014 21:20:53.714856       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1014 21:20:53.716053       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1014 21:20:53.723930       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1014 21:20:53.724100       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1014 21:20:53.725164       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1014 21:20:53.727441       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1014 21:20:53.729950       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1014 21:20:53.732667       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1014 21:20:53.733810       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1014 21:20:53.734977       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1014 21:20:53.736133       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1014 21:20:53.736211       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1014 21:20:53.736248       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1014 21:20:53.741374       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1014 21:20:53.741463       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1014 21:20:53.746028       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:20:53.747308       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1014 21:20:53.747416       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 21:20:53.747509       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-970287"
	I1014 21:20:53.747574       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1014 21:20:53.748919       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1014 21:20:53.750402       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1014 21:20:53.753889       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1014 21:20:53.760355       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 21:20:53.766813       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-proxy [9d950fd859ff930a00c330f302d797f5be27bec2d4c5ef9580c021fce5851d5d] <==
	I1014 21:20:52.150214       1 server_linux.go:53] "Using iptables proxy"
	I1014 21:20:52.506909       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 21:20:52.618092       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 21:20:52.618134       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1014 21:20:52.618224       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 21:20:52.643384       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 21:20:52.643493       1 server_linux.go:132] "Using iptables Proxier"
	I1014 21:20:52.647137       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 21:20:52.647497       1 server.go:527] "Version info" version="v1.34.1"
	I1014 21:20:52.647751       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:20:52.649198       1 config.go:200] "Starting service config controller"
	I1014 21:20:52.649296       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 21:20:52.649349       1 config.go:106] "Starting endpoint slice config controller"
	I1014 21:20:52.649378       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 21:20:52.649414       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 21:20:52.649442       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 21:20:52.650102       1 config.go:309] "Starting node config controller"
	I1014 21:20:52.652523       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 21:20:52.652583       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 21:20:52.750076       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 21:20:52.750170       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 21:20:52.750197       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ecf2261fe60659c49b12b831c921d1d8a2f5ae0415991523e41524de6f31cb9a] <==
	I1014 21:20:48.382239       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 21:20:48.405838       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 21:20:48.406061       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:20:48.406110       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 21:20:48.406162       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1014 21:20:48.440807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1014 21:20:48.440894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1014 21:20:48.440943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1014 21:20:48.495128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1014 21:20:48.511338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1014 21:20:48.511424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1014 21:20:48.511476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1014 21:20:48.511548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1014 21:20:48.511606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1014 21:20:48.511660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1014 21:20:48.511719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1014 21:20:48.511772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1014 21:20:48.511818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1014 21:20:48.511865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1014 21:20:48.511913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1014 21:20:48.511961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1014 21:20:48.512010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1014 21:20:48.512123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1014 21:20:48.512175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1014 21:20:49.912784       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 14 21:20:54 no-preload-970287 kubelet[772]: I1014 21:20:54.358201     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/545dbf55-bce4-4aed-a463-550b5136ae40-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-f4mgk\" (UID: \"545dbf55-bce4-4aed-a463-550b5136ae40\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f4mgk"
	Oct 14 21:20:54 no-preload-970287 kubelet[772]: I1014 21:20:54.358787     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbgf2\" (UniqueName: \"kubernetes.io/projected/545dbf55-bce4-4aed-a463-550b5136ae40-kube-api-access-mbgf2\") pod \"kubernetes-dashboard-855c9754f9-f4mgk\" (UID: \"545dbf55-bce4-4aed-a463-550b5136ae40\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f4mgk"
	Oct 14 21:20:54 no-preload-970287 kubelet[772]: I1014 21:20:54.459214     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e6d88fcf-ee45-4570-b2da-1c3795d0356a-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-5wgx4\" (UID: \"e6d88fcf-ee45-4570-b2da-1c3795d0356a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4"
	Oct 14 21:20:54 no-preload-970287 kubelet[772]: I1014 21:20:54.459408     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wh8w\" (UniqueName: \"kubernetes.io/projected/e6d88fcf-ee45-4570-b2da-1c3795d0356a-kube-api-access-7wh8w\") pod \"dashboard-metrics-scraper-6ffb444bf9-5wgx4\" (UID: \"e6d88fcf-ee45-4570-b2da-1c3795d0356a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4"
	Oct 14 21:20:54 no-preload-970287 kubelet[772]: W1014 21:20:54.673084     772 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/crio-948afd45059b3c04533f589b04f3343070ecff7b1b6b0b53cb21c706dc376958 WatchSource:0}: Error finding container 948afd45059b3c04533f589b04f3343070ecff7b1b6b0b53cb21c706dc376958: Status 404 returned error can't find the container with id 948afd45059b3c04533f589b04f3343070ecff7b1b6b0b53cb21c706dc376958
	Oct 14 21:20:54 no-preload-970287 kubelet[772]: W1014 21:20:54.694090     772 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9ce7c6b3a1e219f2e19d5d7a8c26424071f1a4cf3c4866cd6edc3e0da56cf497/crio-edfc02a18831be8585747bf4198b47582a110453ecb5f56563461ccc9da56b68 WatchSource:0}: Error finding container edfc02a18831be8585747bf4198b47582a110453ecb5f56563461ccc9da56b68: Status 404 returned error can't find the container with id edfc02a18831be8585747bf4198b47582a110453ecb5f56563461ccc9da56b68
	Oct 14 21:21:09 no-preload-970287 kubelet[772]: I1014 21:21:09.255956     772 scope.go:117] "RemoveContainer" containerID="086e4dc527389209e7131154cc3e883964f2a74b62f02262310b311cbd38636e"
	Oct 14 21:21:09 no-preload-970287 kubelet[772]: I1014 21:21:09.289256     772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f4mgk" podStartSLOduration=8.133468338 podStartE2EDuration="15.289239359s" podCreationTimestamp="2025-10-14 21:20:54 +0000 UTC" firstStartedPulling="2025-10-14 21:20:54.677032033 +0000 UTC m=+13.439887693" lastFinishedPulling="2025-10-14 21:21:01.832803054 +0000 UTC m=+20.595658714" observedRunningTime="2025-10-14 21:21:02.264254717 +0000 UTC m=+21.027110369" watchObservedRunningTime="2025-10-14 21:21:09.289239359 +0000 UTC m=+28.052095011"
	Oct 14 21:21:10 no-preload-970287 kubelet[772]: I1014 21:21:10.260266     772 scope.go:117] "RemoveContainer" containerID="086e4dc527389209e7131154cc3e883964f2a74b62f02262310b311cbd38636e"
	Oct 14 21:21:10 no-preload-970287 kubelet[772]: I1014 21:21:10.260590     772 scope.go:117] "RemoveContainer" containerID="cae9e5ee03bef7f43cd086d714ccad801abb6ba4e1f5d3dba576280eb63afe39"
	Oct 14 21:21:10 no-preload-970287 kubelet[772]: E1014 21:21:10.260740     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5wgx4_kubernetes-dashboard(e6d88fcf-ee45-4570-b2da-1c3795d0356a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4" podUID="e6d88fcf-ee45-4570-b2da-1c3795d0356a"
	Oct 14 21:21:11 no-preload-970287 kubelet[772]: I1014 21:21:11.265174     772 scope.go:117] "RemoveContainer" containerID="cae9e5ee03bef7f43cd086d714ccad801abb6ba4e1f5d3dba576280eb63afe39"
	Oct 14 21:21:11 no-preload-970287 kubelet[772]: E1014 21:21:11.265394     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5wgx4_kubernetes-dashboard(e6d88fcf-ee45-4570-b2da-1c3795d0356a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4" podUID="e6d88fcf-ee45-4570-b2da-1c3795d0356a"
	Oct 14 21:21:14 no-preload-970287 kubelet[772]: I1014 21:21:14.640802     772 scope.go:117] "RemoveContainer" containerID="cae9e5ee03bef7f43cd086d714ccad801abb6ba4e1f5d3dba576280eb63afe39"
	Oct 14 21:21:14 no-preload-970287 kubelet[772]: E1014 21:21:14.640984     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5wgx4_kubernetes-dashboard(e6d88fcf-ee45-4570-b2da-1c3795d0356a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4" podUID="e6d88fcf-ee45-4570-b2da-1c3795d0356a"
	Oct 14 21:21:22 no-preload-970287 kubelet[772]: I1014 21:21:22.290623     772 scope.go:117] "RemoveContainer" containerID="eab921e666565670bd355579f8f27093ec17e0451ca0dd9059faa6ccbfe4f416"
	Oct 14 21:21:25 no-preload-970287 kubelet[772]: I1014 21:21:25.778249     772 scope.go:117] "RemoveContainer" containerID="cae9e5ee03bef7f43cd086d714ccad801abb6ba4e1f5d3dba576280eb63afe39"
	Oct 14 21:21:26 no-preload-970287 kubelet[772]: I1014 21:21:26.303923     772 scope.go:117] "RemoveContainer" containerID="cae9e5ee03bef7f43cd086d714ccad801abb6ba4e1f5d3dba576280eb63afe39"
	Oct 14 21:21:26 no-preload-970287 kubelet[772]: I1014 21:21:26.304234     772 scope.go:117] "RemoveContainer" containerID="1c0bac6eb96e4833497570b1054b50fb2a2cdbc485bc3e4f08eb66f614245d09"
	Oct 14 21:21:26 no-preload-970287 kubelet[772]: E1014 21:21:26.304420     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5wgx4_kubernetes-dashboard(e6d88fcf-ee45-4570-b2da-1c3795d0356a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4" podUID="e6d88fcf-ee45-4570-b2da-1c3795d0356a"
	Oct 14 21:21:34 no-preload-970287 kubelet[772]: I1014 21:21:34.640953     772 scope.go:117] "RemoveContainer" containerID="1c0bac6eb96e4833497570b1054b50fb2a2cdbc485bc3e4f08eb66f614245d09"
	Oct 14 21:21:34 no-preload-970287 kubelet[772]: E1014 21:21:34.641607     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5wgx4_kubernetes-dashboard(e6d88fcf-ee45-4570-b2da-1c3795d0356a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5wgx4" podUID="e6d88fcf-ee45-4570-b2da-1c3795d0356a"
	Oct 14 21:21:44 no-preload-970287 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 14 21:21:45 no-preload-970287 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 14 21:21:45 no-preload-970287 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b7e2a8d9fee3df01c06c63bcdfedfa290e5f10824bd890a50ba040a29633a6ff] <==
	2025/10/14 21:21:01 Using namespace: kubernetes-dashboard
	2025/10/14 21:21:01 Using in-cluster config to connect to apiserver
	2025/10/14 21:21:01 Using secret token for csrf signing
	2025/10/14 21:21:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/14 21:21:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/14 21:21:01 Successful initial request to the apiserver, version: v1.34.1
	2025/10/14 21:21:01 Generating JWE encryption key
	2025/10/14 21:21:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/14 21:21:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/14 21:21:02 Initializing JWE encryption key from synchronized object
	2025/10/14 21:21:02 Creating in-cluster Sidecar client
	2025/10/14 21:21:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/14 21:21:02 Serving insecurely on HTTP port: 9090
	2025/10/14 21:21:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/14 21:21:01 Starting overwatch
	
	
	==> storage-provisioner [4bcc6a658ca4f326964a68891e92ff5132a7a58c21942626f11381fb247fb86f] <==
	I1014 21:21:22.354154       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 21:21:22.354237       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1014 21:21:22.357076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:25.815671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:30.077466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:33.676061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:36.729611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:39.751697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:39.756288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 21:21:39.756461       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 21:21:39.756652       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-970287_ac067841-903c-49d5-a35a-ed0ccd22f616!
	I1014 21:21:39.757831       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4ff2602c-0892-49a1-af19-280368cb1cc0", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-970287_ac067841-903c-49d5-a35a-ed0ccd22f616 became leader
	W1014 21:21:39.765185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:39.768759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1014 21:21:39.857085       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-970287_ac067841-903c-49d5-a35a-ed0ccd22f616!
	W1014 21:21:41.771916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:41.777196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:43.781296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:43.785887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:45.789066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:45.795671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:47.799692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:47.806230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:49.809123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 21:21:49.814763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [eab921e666565670bd355579f8f27093ec17e0451ca0dd9059faa6ccbfe4f416] <==
	I1014 21:20:51.349630       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1014 21:21:21.351455       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970287 -n no-preload-970287
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970287 -n no-preload-970287: exit status 2 (365.966288ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-970287 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.41s)

                                                
                                    

Test pass (254/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 37.34
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.19
9 TestDownloadOnly/v1.28.0/DeleteAll 0.35
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 39.1
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.24
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 182.02
31 TestAddons/serial/GCPAuth/Namespaces 0.22
48 TestAddons/StoppedEnableDisable 12.22
49 TestCertOptions 36.92
50 TestCertExpiration 236.6
59 TestErrorSpam/setup 31.9
60 TestErrorSpam/start 0.76
61 TestErrorSpam/status 1.07
62 TestErrorSpam/pause 7.2
63 TestErrorSpam/unpause 6.67
64 TestErrorSpam/stop 1.43
67 TestFunctional/serial/CopySyncFile 0.01
68 TestFunctional/serial/StartWithProxy 80.65
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 28.78
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.58
76 TestFunctional/serial/CacheCmd/cache/add_local 1.1
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.89
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 36.42
85 TestFunctional/serial/ComponentHealth 0.09
86 TestFunctional/serial/LogsCmd 1.53
87 TestFunctional/serial/LogsFileCmd 1.49
88 TestFunctional/serial/InvalidService 4.68
90 TestFunctional/parallel/ConfigCmd 0.45
91 TestFunctional/parallel/DashboardCmd 9.31
92 TestFunctional/parallel/DryRun 0.63
93 TestFunctional/parallel/InternationalLanguage 0.28
94 TestFunctional/parallel/StatusCmd 1.27
99 TestFunctional/parallel/AddonsCmd 0.18
100 TestFunctional/parallel/PersistentVolumeClaim 25.93
102 TestFunctional/parallel/SSHCmd 0.71
103 TestFunctional/parallel/CpCmd 2.44
105 TestFunctional/parallel/FileSync 0.33
106 TestFunctional/parallel/CertSync 2.19
110 TestFunctional/parallel/NodeLabels 0.12
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
114 TestFunctional/parallel/License 0.32
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.76
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.49
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
128 TestFunctional/parallel/ProfileCmd/profile_list 0.41
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
130 TestFunctional/parallel/MountCmd/any-port 7.68
131 TestFunctional/parallel/MountCmd/specific-port 1.88
132 TestFunctional/parallel/MountCmd/VerifyCleanup 2.42
133 TestFunctional/parallel/ServiceCmd/List 0.61
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.62
138 TestFunctional/parallel/Version/short 0.07
139 TestFunctional/parallel/Version/components 1.34
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
144 TestFunctional/parallel/ImageCommands/ImageBuild 4.04
145 TestFunctional/parallel/ImageCommands/Setup 0.73
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.65
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 210.4
164 TestMultiControlPlane/serial/DeployApp 6.2
165 TestMultiControlPlane/serial/PingHostFromPods 1.54
166 TestMultiControlPlane/serial/AddWorkerNode 57.86
167 TestMultiControlPlane/serial/NodeLabels 0.11
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.09
169 TestMultiControlPlane/serial/CopyFile 19.85
170 TestMultiControlPlane/serial/StopSecondaryNode 12.79
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.79
172 TestMultiControlPlane/serial/RestartSecondaryNode 33.16
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.24
177 TestMultiControlPlane/serial/StopCluster 23.93
178 TestMultiControlPlane/serial/RestartCluster 78.2
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
180 TestMultiControlPlane/serial/AddSecondaryNode 80.82
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.11
185 TestJSONOutput/start/Command 81.2
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.69
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 69.81
211 TestKicCustomNetwork/use_default_bridge_network 32.81
212 TestKicExistingNetwork 38
213 TestKicCustomSubnet 39.69
214 TestKicStaticIP 33.31
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 80.4
219 TestMountStart/serial/StartWithMountFirst 7.3
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 9.69
222 TestMountStart/serial/VerifyMountSecond 0.29
223 TestMountStart/serial/DeleteFirst 1.62
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.2
226 TestMountStart/serial/RestartStopped 8.04
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 134.55
231 TestMultiNode/serial/DeployApp2Nodes 4.77
232 TestMultiNode/serial/PingHostFrom2Pods 0.9
233 TestMultiNode/serial/AddNode 58.37
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.64
237 TestMultiNode/serial/StopNode 2.34
238 TestMultiNode/serial/StartAfterStop 8.25
239 TestMultiNode/serial/RestartKeepsNodes 76.15
240 TestMultiNode/serial/DeleteNode 5.56
241 TestMultiNode/serial/StopMultiNode 23.82
242 TestMultiNode/serial/RestartMultiNode 57.15
243 TestMultiNode/serial/ValidateNameConflict 37.79
248 TestPreload 162.16
250 TestScheduledStopUnix 103.98
253 TestInsufficientStorage 13.58
254 TestRunningBinaryUpgrade 57.09
256 TestKubernetesUpgrade 349.92
257 TestMissingContainerUpgrade 120.3
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 37.66
261 TestNoKubernetes/serial/StartWithStopK8s 114.88
262 TestNoKubernetes/serial/Start 9.51
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
264 TestNoKubernetes/serial/ProfileList 1.45
265 TestNoKubernetes/serial/Stop 1.41
266 TestNoKubernetes/serial/StartNoArgs 8.24
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.45
268 TestStoppedBinaryUpgrade/Setup 1.76
269 TestStoppedBinaryUpgrade/Upgrade 55.42
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.3
279 TestPause/serial/Start 79.01
280 TestPause/serial/SecondStartNoReconfiguration 27.59
289 TestNetworkPlugins/group/false 3.68
294 TestStartStop/group/old-k8s-version/serial/FirstStart 59.35
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.38
297 TestStartStop/group/old-k8s-version/serial/Stop 11.89
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
299 TestStartStop/group/old-k8s-version/serial/SecondStart 51.61
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
302 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.98
303 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
304 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
307 TestStartStop/group/embed-certs/serial/FirstStart 83.09
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.32
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.96
311 TestStartStop/group/embed-certs/serial/DeployApp 8.32
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
313 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 56.62
315 TestStartStop/group/embed-certs/serial/Stop 12.33
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
317 TestStartStop/group/embed-certs/serial/SecondStart 54.91
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
324 TestStartStop/group/no-preload/serial/FirstStart 73.01
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
329 TestStartStop/group/newest-cni/serial/FirstStart 48.1
330 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/Stop 1.27
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.34
334 TestStartStop/group/no-preload/serial/DeployApp 8.52
335 TestStartStop/group/newest-cni/serial/SecondStart 17.77
337 TestStartStop/group/no-preload/serial/Stop 12.23
338 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
342 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
343 TestStartStop/group/no-preload/serial/SecondStart 61.51
344 TestNetworkPlugins/group/auto/Start 83.79
345 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
346 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
347 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
349 TestNetworkPlugins/group/kindnet/Start 90.11
350 TestNetworkPlugins/group/auto/KubeletFlags 0.33
351 TestNetworkPlugins/group/auto/NetCatPod 12.3
352 TestNetworkPlugins/group/auto/DNS 0.23
353 TestNetworkPlugins/group/auto/Localhost 0.16
354 TestNetworkPlugins/group/auto/HairPin 0.17
355 TestNetworkPlugins/group/calico/Start 55.83
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.46
358 TestNetworkPlugins/group/kindnet/NetCatPod 11.37
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestNetworkPlugins/group/calico/KubeletFlags 0.37
361 TestNetworkPlugins/group/calico/NetCatPod 11.28
362 TestNetworkPlugins/group/kindnet/DNS 0.25
363 TestNetworkPlugins/group/kindnet/Localhost 0.17
364 TestNetworkPlugins/group/kindnet/HairPin 0.18
365 TestNetworkPlugins/group/calico/DNS 0.23
366 TestNetworkPlugins/group/calico/Localhost 0.18
367 TestNetworkPlugins/group/calico/HairPin 0.19
368 TestNetworkPlugins/group/custom-flannel/Start 70.13
369 TestNetworkPlugins/group/enable-default-cni/Start 51.84
370 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
371 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.26
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.26
374 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
375 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
376 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
377 TestNetworkPlugins/group/custom-flannel/DNS 0.21
378 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
379 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
380 TestNetworkPlugins/group/flannel/Start 67.87
381 TestNetworkPlugins/group/bridge/Start 73.51
382 TestNetworkPlugins/group/flannel/ControllerPod 6
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
384 TestNetworkPlugins/group/flannel/NetCatPod 10.26
385 TestNetworkPlugins/group/flannel/DNS 0.2
386 TestNetworkPlugins/group/flannel/Localhost 0.13
387 TestNetworkPlugins/group/flannel/HairPin 0.13
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
389 TestNetworkPlugins/group/bridge/NetCatPod 10.27
390 TestNetworkPlugins/group/bridge/DNS 0.18
391 TestNetworkPlugins/group/bridge/Localhost 0.19
392 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.28.0/json-events (37.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-929479 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-929479 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (37.341918684s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (37.34s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1014 19:57:59.961405  546910 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1014 19:57:59.961479  546910 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-929479
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-929479: exit status 85 (193.628004ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-929479 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-929479 │ jenkins │ v1.37.0 │ 14 Oct 25 19:57 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:57:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:57:22.661951  546915 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:57:22.662132  546915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:57:22.662161  546915 out.go:374] Setting ErrFile to fd 2...
	I1014 19:57:22.662184  546915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:57:22.662578  546915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	W1014 19:57:22.662924  546915 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21409-545047/.minikube/config/config.json: open /home/jenkins/minikube-integration/21409-545047/.minikube/config/config.json: no such file or directory
	I1014 19:57:22.663433  546915 out.go:368] Setting JSON to true
	I1014 19:57:22.664338  546915 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9593,"bootTime":1760462250,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 19:57:22.664439  546915 start.go:141] virtualization:  
	I1014 19:57:22.668647  546915 out.go:99] [download-only-929479] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1014 19:57:22.668868  546915 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball: no such file or directory
	I1014 19:57:22.668911  546915 notify.go:220] Checking for updates...
	I1014 19:57:22.671765  546915 out.go:171] MINIKUBE_LOCATION=21409
	I1014 19:57:22.674900  546915 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:57:22.677976  546915 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 19:57:22.680896  546915 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 19:57:22.683765  546915 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1014 19:57:22.689496  546915 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1014 19:57:22.689775  546915 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:57:22.712889  546915 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 19:57:22.712995  546915 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:57:22.769859  546915 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-14 19:57:22.760682569 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 19:57:22.769975  546915 docker.go:318] overlay module found
	I1014 19:57:22.772994  546915 out.go:99] Using the docker driver based on user configuration
	I1014 19:57:22.773033  546915 start.go:305] selected driver: docker
	I1014 19:57:22.773043  546915 start.go:925] validating driver "docker" against <nil>
	I1014 19:57:22.773150  546915 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:57:22.827956  546915 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-14 19:57:22.818770367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 19:57:22.828111  546915 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 19:57:22.828403  546915 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1014 19:57:22.828565  546915 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 19:57:22.831583  546915 out.go:171] Using Docker driver with root privileges
	I1014 19:57:22.834534  546915 cni.go:84] Creating CNI manager for ""
	I1014 19:57:22.834601  546915 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:57:22.834614  546915 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 19:57:22.834694  546915 start.go:349] cluster config:
	{Name:download-only-929479 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-929479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:57:22.837616  546915 out.go:99] Starting "download-only-929479" primary control-plane node in "download-only-929479" cluster
	I1014 19:57:22.837649  546915 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 19:57:22.840460  546915 out.go:99] Pulling base image v0.0.48-1759745255-21703 ...
	I1014 19:57:22.840487  546915 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1014 19:57:22.840656  546915 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 19:57:22.856360  546915 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1014 19:57:22.856566  546915 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1014 19:57:22.856662  546915 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1014 19:57:22.916165  546915 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1014 19:57:22.916188  546915 cache.go:58] Caching tarball of preloaded images
	I1014 19:57:22.916373  546915 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1014 19:57:22.919716  546915 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1014 19:57:22.919743  546915 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1014 19:57:23.021956  546915 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1014 19:57:23.022089  546915 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1014 19:57:27.612944  546915 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	
	
	* The control-plane node download-only-929479 host does not exist
	  To start a cluster, run: "minikube start -p download-only-929479"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.19s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.35s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-929479
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (39.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-571258 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-571258 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (39.094974977s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (39.10s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1014 19:58:39.740869  546910 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1014 19:58:39.740905  546910 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-571258
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-571258: exit status 85 (92.406344ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-929479 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-929479 │ jenkins │ v1.37.0 │ 14 Oct 25 19:57 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │ 14 Oct 25 19:58 UTC │
	│ delete  │ -p download-only-929479                                                                                                                                                   │ download-only-929479 │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │ 14 Oct 25 19:58 UTC │
	│ start   │ -o=json --download-only -p download-only-571258 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-571258 │ jenkins │ v1.37.0 │ 14 Oct 25 19:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:58:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:58:00.690368  547117 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:58:00.690564  547117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:58:00.690592  547117 out.go:374] Setting ErrFile to fd 2...
	I1014 19:58:00.690612  547117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:58:00.690930  547117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 19:58:00.691442  547117 out.go:368] Setting JSON to true
	I1014 19:58:00.692337  547117 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9631,"bootTime":1760462250,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 19:58:00.692441  547117 start.go:141] virtualization:  
	I1014 19:58:00.695777  547117 out.go:99] [download-only-571258] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 19:58:00.695973  547117 notify.go:220] Checking for updates...
	I1014 19:58:00.699039  547117 out.go:171] MINIKUBE_LOCATION=21409
	I1014 19:58:00.702187  547117 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:58:00.705203  547117 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 19:58:00.708036  547117 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 19:58:00.710859  547117 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1014 19:58:00.716366  547117 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1014 19:58:00.716614  547117 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:58:00.741849  547117 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 19:58:00.741961  547117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:58:00.797726  547117 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-14 19:58:00.788191476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 19:58:00.797831  547117 docker.go:318] overlay module found
	I1014 19:58:00.800710  547117 out.go:99] Using the docker driver based on user configuration
	I1014 19:58:00.800752  547117 start.go:305] selected driver: docker
	I1014 19:58:00.800760  547117 start.go:925] validating driver "docker" against <nil>
	I1014 19:58:00.800868  547117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:58:00.864729  547117 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-14 19:58:00.854787126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 19:58:00.864885  547117 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 19:58:00.865174  547117 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1014 19:58:00.865332  547117 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 19:58:00.868427  547117 out.go:171] Using Docker driver with root privileges
	I1014 19:58:00.871191  547117 cni.go:84] Creating CNI manager for ""
	I1014 19:58:00.871260  547117 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:58:00.871273  547117 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 19:58:00.871352  547117 start.go:349] cluster config:
	{Name:download-only-571258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-571258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:58:00.874346  547117 out.go:99] Starting "download-only-571258" primary control-plane node in "download-only-571258" cluster
	I1014 19:58:00.874368  547117 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 19:58:00.877245  547117 out.go:99] Pulling base image v0.0.48-1759745255-21703 ...
	I1014 19:58:00.877282  547117 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:58:00.877393  547117 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 19:58:00.897962  547117 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1014 19:58:00.898088  547117 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1014 19:58:00.898106  547117 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1014 19:58:00.898111  547117 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1014 19:58:00.898118  547117 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1014 19:58:00.944630  547117 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1014 19:58:00.944657  547117 cache.go:58] Caching tarball of preloaded images
	I1014 19:58:00.944840  547117 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:58:00.948127  547117 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1014 19:58:00.948165  547117 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1014 19:58:01.042075  547117 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1014 19:58:01.042129  547117 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21409-545047/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-571258 host does not exist
	  To start a cluster, run: "minikube start -p download-only-571258"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-571258
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1014 19:58:40.917422  546910 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-998704 --alsologtostderr --binary-mirror http://127.0.0.1:42403 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-998704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-998704
--- PASS: TestBinaryMirror (0.60s)
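The log line above shows the kubectl binary being fetched with a "checksum=file:<url>.sha256" query, i.e. the download is verified against a published SHA-256 digest rather than trusted blindly. Purely as an illustration of that verification step (this is not minikube's download code; the URL is reused from the log line), a minimal Go sketch:

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	// fetch downloads a URL into memory.
	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		return io.ReadAll(resp.Body)
	}

	func main() {
		binURL := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl"
		body, err := fetch(binURL)
		if err != nil {
			panic(err)
		}
		sumTxt, err := fetch(binURL + ".sha256") // published hex digest
		if err != nil {
			panic(err)
		}
		want := strings.Fields(string(sumTxt))[0]
		sum := sha256.Sum256(body)
		if got := hex.EncodeToString(sum[:]); got != want {
			panic(fmt.Sprintf("checksum mismatch: got %s, want %s", got, want))
		}
		fmt.Println("kubectl checksum verified")
	}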

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-864166
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-864166: exit status 85 (74.489003ms)
-- stdout --
	* Profile "addons-864166" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-864166"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-864166
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-864166: exit status 85 (77.670538ms)
-- stdout --
	* Profile "addons-864166" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-864166"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (182.02s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-864166 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-864166 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m2.020939599s)
--- PASS: TestAddons/Setup (182.02s)

TestAddons/serial/GCPAuth/Namespaces (0.22s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-864166 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-864166 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

TestAddons/StoppedEnableDisable (12.22s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-864166
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-864166: (11.910078338s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-864166
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-864166
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-864166
--- PASS: TestAddons/StoppedEnableDisable (12.22s)

TestCertOptions (36.92s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-967887 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1014 21:13:07.472335  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:13:24.374343  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-967887 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.26430901s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-967887 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-967887 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-967887 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-967887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-967887
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-967887: (1.956307726s)
--- PASS: TestCertOptions (36.92s)
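TestCertOptions passes extra --apiserver-ips/--apiserver-names/--apiserver-port flags and then reads /var/lib/minikube/certs/apiserver.crt with openssl to confirm the requested SANs ended up in the certificate. For reference, a hypothetical Go equivalent of that SAN check (assuming the certificate has been copied out of the node to a local apiserver.crt; this is not the test's own code):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Local copy of the node's /var/lib/minikube/certs/apiserver.crt.
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The flags above should surface here, e.g. localhost / www.google.com
		// among the DNS names and 127.0.0.1 / 192.168.15.15 among the IPs.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
	}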

TestCertExpiration (236.6s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-341199 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-341199 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (34.938260128s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-341199 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-341199 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.869654058s)
helpers_test.go:175: Cleaning up "cert-expiration-341199" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-341199
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-341199: (2.795387935s)
--- PASS: TestCertExpiration (236.60s)

TestErrorSpam/setup (31.9s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-005006 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-005006 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-005006 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-005006 --driver=docker  --container-runtime=crio: (31.899424531s)
--- PASS: TestErrorSpam/setup (31.90s)

TestErrorSpam/start (0.76s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

TestErrorSpam/status (1.07s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 status
--- PASS: TestErrorSpam/status (1.07s)

TestErrorSpam/pause (7.2s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 pause: exit status 80 (2.468030056s)
-- stdout --
	* Pausing node nospam-005006 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:05:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 pause: exit status 80 (2.218855424s)
-- stdout --
	* Pausing node nospam-005006 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:05:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 pause: exit status 80 (2.507568937s)
-- stdout --
	* Pausing node nospam-005006 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:05:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (7.20s)
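Each pause attempt above fails the same way: minikube shells out to "sudo runc list -f json" to list running containers, and the command exits 1 because /run/runc (runc's default state directory) does not exist on this crio node. A small, hypothetical Go reproduction of just that probe, not minikube's own pause implementation:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same command quoted in the GUEST_PAUSE error above.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// On this node: "open /run/runc: no such file or directory", exit status 1.
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("running containers: %s\n", out)
	}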

TestErrorSpam/unpause (6.67s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 unpause: exit status 80 (2.348712648s)
-- stdout --
	* Unpausing node nospam-005006 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:05:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 unpause: exit status 80 (2.178285589s)
-- stdout --
	* Unpausing node nospam-005006 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:05:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 unpause: exit status 80 (2.145659499s)
-- stdout --
	* Unpausing node nospam-005006 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-14T20:05:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.67s)

TestErrorSpam/stop (1.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 stop: (1.221289464s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-005006 --log_dir /tmp/nospam-005006 stop
--- PASS: TestErrorSpam/stop (1.43s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/test/nested/copy/546910/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (80.65s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-276405 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1014 20:06:44.402283  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:06:44.408686  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:06:44.420085  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:06:44.441573  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:06:44.483007  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:06:44.564535  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:06:44.725932  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:06:45.047338  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:06:45.691087  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:06:46.973050  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:06:49.534880  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:06:54.656866  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:07:04.898211  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-276405 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m20.647466973s)
--- PASS: TestFunctional/serial/StartWithProxy (80.65s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.78s)

=== RUN   TestFunctional/serial/SoftStart
I1014 20:07:18.832054  546910 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-276405 --alsologtostderr -v=8
E1014 20:07:25.379573  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-276405 --alsologtostderr -v=8: (28.769844383s)
functional_test.go:678: soft start took 28.776618771s for "functional-276405" cluster.
I1014 20:07:47.602220  546910 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (28.78s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-276405 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-276405 cache add registry.k8s.io/pause:3.1: (1.160248035s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-276405 cache add registry.k8s.io/pause:3.3: (1.251178816s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-276405 cache add registry.k8s.io/pause:latest: (1.168983723s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.58s)

TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-276405 /tmp/TestFunctionalserialCacheCmdcacheadd_local3012031670/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 cache add minikube-local-cache-test:functional-276405
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 cache delete minikube-local-cache-test:functional-276405
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-276405
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-276405 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (321.665943ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 kubectl -- --context functional-276405 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-276405 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (36.42s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-276405 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1014 20:08:06.342709  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-276405 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.415863783s)
functional_test.go:776: restart took 36.415976122s for "functional-276405" cluster.
I1014 20:08:31.586887  546910 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (36.42s)

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-276405 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

TestFunctional/serial/LogsCmd (1.53s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-276405 logs: (1.525596955s)
--- PASS: TestFunctional/serial/LogsCmd (1.53s)

TestFunctional/serial/LogsFileCmd (1.49s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 logs --file /tmp/TestFunctionalserialLogsFileCmd687590462/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-276405 logs --file /tmp/TestFunctionalserialLogsFileCmd687590462/001/logs.txt: (1.493540937s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

TestFunctional/serial/InvalidService (4.68s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-276405 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-276405
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-276405: exit status 115 (408.014329ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32622 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-276405 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-276405 delete -f testdata/invalidsvc.yaml: (1.035741814s)
--- PASS: TestFunctional/serial/InvalidService (4.68s)

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-276405 config get cpus: exit status 14 (73.617092ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-276405 config get cpus: exit status 14 (58.41974ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

TestFunctional/parallel/DashboardCmd (9.31s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-276405 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-276405 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 574172: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.31s)

TestFunctional/parallel/DryRun (0.63s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-276405 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-276405 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (256.407136ms)
-- stdout --
	* [functional-276405] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1014 20:19:10.558653  573556 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:19:10.558814  573556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:19:10.558837  573556 out.go:374] Setting ErrFile to fd 2...
	I1014 20:19:10.558848  573556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:19:10.559137  573556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:19:10.560367  573556 out.go:368] Setting JSON to false
	I1014 20:19:10.562287  573556 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10901,"bootTime":1760462250,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 20:19:10.562460  573556 start.go:141] virtualization:  
	I1014 20:19:10.565939  573556 out.go:179] * [functional-276405] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 20:19:10.569186  573556 notify.go:220] Checking for updates...
	I1014 20:19:10.574263  573556 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:19:10.577148  573556 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:19:10.580119  573556 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 20:19:10.582977  573556 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 20:19:10.585835  573556 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 20:19:10.588815  573556 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:19:10.593232  573556 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:19:10.594004  573556 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:19:10.625794  573556 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 20:19:10.625985  573556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:19:10.710038  573556 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 20:19:10.69975518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 20:19:10.710157  573556 docker.go:318] overlay module found
	I1014 20:19:10.713079  573556 out.go:179] * Using the docker driver based on existing profile
	I1014 20:19:10.718629  573556 start.go:305] selected driver: docker
	I1014 20:19:10.718652  573556 start.go:925] validating driver "docker" against &{Name:functional-276405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-276405 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:19:10.718740  573556 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:19:10.722053  573556 out.go:203] 
	W1014 20:19:10.724899  573556 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1014 20:19:10.727651  573556 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-276405 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.63s)

TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-276405 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-276405 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (279.181131ms)

-- stdout --
	* [functional-276405] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1014 20:19:10.293725  573478 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:19:10.293878  573478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:19:10.293884  573478 out.go:374] Setting ErrFile to fd 2...
	I1014 20:19:10.293888  573478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:19:10.295713  573478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:19:10.296160  573478 out.go:368] Setting JSON to false
	I1014 20:19:10.297273  573478 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10901,"bootTime":1760462250,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 20:19:10.297362  573478 start.go:141] virtualization:  
	I1014 20:19:10.300895  573478 out.go:179] * [functional-276405] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1014 20:19:10.306060  573478 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:19:10.306161  573478 notify.go:220] Checking for updates...
	I1014 20:19:10.312383  573478 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:19:10.315292  573478 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 20:19:10.318788  573478 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 20:19:10.321206  573478 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 20:19:10.324104  573478 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:19:10.327476  573478 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:19:10.328128  573478 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:19:10.358907  573478 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 20:19:10.359025  573478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:19:10.452362  573478 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 20:19:10.44261122 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 20:19:10.452470  573478 docker.go:318] overlay module found
	I1014 20:19:10.455662  573478 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1014 20:19:10.458548  573478 start.go:305] selected driver: docker
	I1014 20:19:10.458574  573478 start.go:925] validating driver "docker" against &{Name:functional-276405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-276405 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:19:10.458666  573478 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:19:10.462161  573478 out.go:203] 
	W1014 20:19:10.466657  573478 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1014 20:19:10.469726  573478 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)
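The French messages above are the localized rendering of the same RSRC_INSUFFICIENT_REQ_MEMORY error; the test only has to run the dry run under a French locale to exercise the translated error path. A sketch of reproducing it by hand, assuming the locale environment variable is what selects the translation on this host:

    # sketch: expect the localized exit-23 message shown in the stderr block above
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-276405 --dry-run \
      --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio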

TestFunctional/parallel/StatusCmd (1.27s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.27s)

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (25.93s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [c6b514bf-1462-4371-838d-7da460e2a035] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003486884s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-276405 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-276405 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-276405 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-276405 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [23e676cf-a292-4909-b336-2082e75b2e58] Pending
helpers_test.go:352: "sp-pod" [23e676cf-a292-4909-b336-2082e75b2e58] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [23e676cf-a292-4909-b336-2082e75b2e58] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003237762s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-276405 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-276405 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-276405 delete -f testdata/storage-provisioner/pod.yaml: (1.005087572s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-276405 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [263f28b0-bae1-438f-96d9-6561fb1c8935] Pending
helpers_test.go:352: "sp-pod" [263f28b0-bae1-438f-96d9-6561fb1c8935] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003116673s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-276405 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.93s)
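The sequence above claims a volume from the default StorageClass, mounts it in sp-pod, writes /tmp/mount/foo, recreates the pod from the same manifest, and confirms the file survived. A compressed sketch of that persistence check by hand, using only the names and manifests that appear in the log:

    kubectl --context functional-276405 get pvc myclaim -o jsonpath='{.status.phase}'    # expect Bound
    kubectl --context functional-276405 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-276405 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-276405 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-276405 exec sp-pod -- ls /tmp/mount                     # foo persists across the pod recreation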

TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (2.44s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh -n functional-276405 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 cp functional-276405:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd614882265/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh -n functional-276405 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh -n functional-276405 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.44s)

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/546910/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "sudo cat /etc/test/nested/copy/546910/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)
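FileSync checks that a file staged on the host is mirrored into the node at the same absolute path. A sketch of the idea, assuming the host copy is staged under MINIKUBE_HOME's files/ tree (the tree minikube syncs into the node when the cluster is started):

    cat /home/jenkins/minikube-integration/21409-545047/.minikube/files/etc/test/nested/copy/546910/hosts
    out/minikube-linux-arm64 -p functional-276405 ssh "sudo cat /etc/test/nested/copy/546910/hosts"
    # both should print: Test file for checking file sync process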

TestFunctional/parallel/CertSync (2.19s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/546910.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "sudo cat /etc/ssl/certs/546910.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/546910.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "sudo cat /usr/share/ca-certificates/546910.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5469102.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "sudo cat /etc/ssl/certs/5469102.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5469102.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "sudo cat /usr/share/ca-certificates/5469102.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.19s)
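CertSync verifies each synced certificate twice: once under its own name and once under its OpenSSL subject-hash name (the *.0 entries). A sketch of cross-checking one pair by hand, assuming openssl is present in the node image:

    out/minikube-linux-arm64 -p functional-276405 ssh "openssl x509 -noout -subject_hash -in /etc/ssl/certs/546910.pem"
    # the hash printed should match the 51391683.0 filename checked above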

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-276405 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-276405 ssh "sudo systemctl is-active docker": exit status 1 (373.803778ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-276405 ssh "sudo systemctl is-active containerd": exit status 1 (348.029995ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
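Because this profile runs the crio runtime, docker and containerd are expected to be inactive, which is why `systemctl is-active` exits with status 3 above. The complementary check for the active runtime would be (service name assumed to be crio in the node image):

    out/minikube-linux-arm64 -p functional-276405 ssh "sudo systemctl is-active crio"   # expect: active, exit 0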

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.76s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-276405 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-276405 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-276405 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-276405 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 569888: os: process already finished
helpers_test.go:525: unable to kill pid 569697: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.76s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-276405 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-276405 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [a10ed157-b72a-4290-a134-375337f98e9c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [a10ed157-b72a-4290-a134-375337f98e9c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003966064s
I1014 20:08:51.792926  546910 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.49s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-276405 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.144.99 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
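Taken together, the tunnel subtests show that a running `minikube tunnel` gives nginx-svc a reachable LoadBalancer ingress IP. A compressed sketch of the same round trip; the IP is simply the one this particular run was assigned:

    out/minikube-linux-arm64 -p functional-276405 tunnel --alsologtostderr &   # keep the tunnel running in the background
    kubectl --context functional-276405 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl -sI http://10.99.144.99   # the endpoint AccessDirect reports as working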

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-276405 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "353.530562ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "58.733759ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "343.217944ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "59.399895ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)
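The timing lines above cover the JSON variants of `profile list`; a sketch of consuming that output, assuming the top-level valid/invalid arrays in the JSON and that jq is available on the host:

    out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'   # e.g. functional-276405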

TestFunctional/parallel/MountCmd/any-port (7.68s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-276405 /tmp/TestFunctionalparallelMountCmdany-port3299059062/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760473136949151635" to /tmp/TestFunctionalparallelMountCmdany-port3299059062/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760473136949151635" to /tmp/TestFunctionalparallelMountCmdany-port3299059062/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760473136949151635" to /tmp/TestFunctionalparallelMountCmdany-port3299059062/001/test-1760473136949151635
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-276405 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (379.007503ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1014 20:18:57.329942  546910 retry.go:31] will retry after 298.415752ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 14 20:18 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 14 20:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 14 20:18 test-1760473136949151635
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh cat /mount-9p/test-1760473136949151635
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-276405 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [89f540ad-1741-4e32-a6d0-449e7d79ebd5] Pending
helpers_test.go:352: "busybox-mount" [89f540ad-1741-4e32-a6d0-449e7d79ebd5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [89f540ad-1741-4e32-a6d0-449e7d79ebd5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [89f540ad-1741-4e32-a6d0-449e7d79ebd5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.007207149s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-276405 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-276405 /tmp/TestFunctionalparallelMountCmdany-port3299059062/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.68s)
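The any-port flow is: a host directory exposed over 9p at /mount-9p, findmnt retried until the mount appears, then a busybox pod reading and writing through it. A sketch of the manual equivalent, with a hypothetical host directory:

    out/minikube-linux-arm64 mount -p functional-276405 /tmp/demo-mount:/mount-9p &   # /tmp/demo-mount is hypothetical
    out/minikube-linux-arm64 -p functional-276405 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-276405 ssh "ls -la /mount-9p"
    out/minikube-linux-arm64 mount -p functional-276405 --kill=true                   # clean up, as VerifyCleanup does below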

TestFunctional/parallel/MountCmd/specific-port (1.88s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-276405 /tmp/TestFunctionalparallelMountCmdspecific-port1610252827/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-276405 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (356.127181ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1014 20:19:04.984799  546910 retry.go:31] will retry after 488.13666ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-276405 /tmp/TestFunctionalparallelMountCmdspecific-port1610252827/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-276405 ssh "sudo umount -f /mount-9p": exit status 1 (272.247455ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-276405 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-276405 /tmp/TestFunctionalparallelMountCmdspecific-port1610252827/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.88s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.42s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-276405 /tmp/TestFunctionalparallelMountCmdVerifyCleanup227394297/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-276405 /tmp/TestFunctionalparallelMountCmdVerifyCleanup227394297/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-276405 /tmp/TestFunctionalparallelMountCmdVerifyCleanup227394297/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-276405 ssh "findmnt -T" /mount1: exit status 1 (582.821488ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1014 20:19:07.092288  546910 retry.go:31] will retry after 714.164622ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-276405 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-276405 /tmp/TestFunctionalparallelMountCmdVerifyCleanup227394297/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-276405 /tmp/TestFunctionalparallelMountCmdVerifyCleanup227394297/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-276405 /tmp/TestFunctionalparallelMountCmdVerifyCleanup227394297/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.42s)

TestFunctional/parallel/ServiceCmd/List (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 service list -o json
functional_test.go:1504: Took "622.387022ms" to run "out/minikube-linux-arm64 -p functional-276405 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.34s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-276405 version -o=json --components: (1.334884728s)
--- PASS: TestFunctional/parallel/Version/components (1.34s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-276405 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-276405 image ls --format short --alsologtostderr:
I1014 20:19:24.381819  575934 out.go:360] Setting OutFile to fd 1 ...
I1014 20:19:24.382021  575934 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 20:19:24.382034  575934 out.go:374] Setting ErrFile to fd 2...
I1014 20:19:24.382038  575934 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 20:19:24.382338  575934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
I1014 20:19:24.382999  575934 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 20:19:24.383164  575934 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 20:19:24.383659  575934 cli_runner.go:164] Run: docker container inspect functional-276405 --format={{.State.Status}}
I1014 20:19:24.408652  575934 ssh_runner.go:195] Run: systemctl --version
I1014 20:19:24.408732  575934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-276405
I1014 20:19:24.431666  575934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33517 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/functional-276405/id_rsa Username:docker}
I1014 20:19:24.538984  575934 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
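As the Stderr trace shows, `image ls` is answered by sshing into the node and querying CRI-O directly, so the same data is available without minikube's formatting:

    out/minikube-linux-arm64 -p functional-276405 ssh "sudo crictl images --output json"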

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-276405 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ docker.io/library/nginx                 │ latest             │ e35ad067421cc │ 184MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-276405 image ls --format table --alsologtostderr:
I1014 20:19:25.180103  576165 out.go:360] Setting OutFile to fd 1 ...
I1014 20:19:25.180225  576165 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 20:19:25.180234  576165 out.go:374] Setting ErrFile to fd 2...
I1014 20:19:25.180239  576165 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 20:19:25.180590  576165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
I1014 20:19:25.181220  576165 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 20:19:25.181342  576165 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 20:19:25.181817  576165 cli_runner.go:164] Run: docker container inspect functional-276405 --format={{.State.Status}}
I1014 20:19:25.199225  576165 ssh_runner.go:195] Run: systemctl --version
I1014 20:19:25.199289  576165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-276405
I1014 20:19:25.219829  576165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33517 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/functional-276405/id_rsa Username:docker}
I1014 20:19:25.323207  576165 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-276405 image ls --format json --alsologtostderr:
[{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1ddd
ad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9","repoDigests":["docker.io/li
brary/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6","docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a"],"repoTags":["docker.io/library/nginx:latest"],"size":"184136558"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker
.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc
25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc
1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-276405 image ls --format json --alsologtostderr:
I1014 20:19:24.929111  576104 out.go:360] Setting OutFile to fd 1 ...
I1014 20:19:24.929667  576104 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 20:19:24.929695  576104 out.go:374] Setting ErrFile to fd 2...
I1014 20:19:24.929715  576104 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 20:19:24.929998  576104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
I1014 20:19:24.930636  576104 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 20:19:24.930796  576104 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 20:19:24.931569  576104 cli_runner.go:164] Run: docker container inspect functional-276405 --format={{.State.Status}}
I1014 20:19:24.954536  576104 ssh_runner.go:195] Run: systemctl --version
I1014 20:19:24.954584  576104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-276405
I1014 20:19:24.976523  576104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33517 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/functional-276405/id_rsa Username:docker}
I1014 20:19:25.083625  576104 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-276405 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
- docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a
repoTags:
- docker.io/library/nginx:latest
size: "184136558"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-276405 image ls --format yaml --alsologtostderr:
I1014 20:19:24.639976  576034 out.go:360] Setting OutFile to fd 1 ...
I1014 20:19:24.640159  576034 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 20:19:24.640172  576034 out.go:374] Setting ErrFile to fd 2...
I1014 20:19:24.640177  576034 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 20:19:24.640442  576034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
I1014 20:19:24.641057  576034 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 20:19:24.641172  576034 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 20:19:24.641621  576034 cli_runner.go:164] Run: docker container inspect functional-276405 --format={{.State.Status}}
I1014 20:19:24.664671  576034 ssh_runner.go:195] Run: systemctl --version
I1014 20:19:24.664740  576034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-276405
I1014 20:19:24.698310  576034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33517 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/functional-276405/id_rsa Username:docker}
I1014 20:19:24.821749  576034 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
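
The image-listing tests above (JSON and YAML variants) follow the same path visible in their stderr: ssh into the node, run sudo crictl images --output json, and reshape the result into the id/repoTags/repoDigests/size entries shown in the stdout. Below is a minimal local sketch of that decoding step; it assumes the standard crictl output fields rather than minikube's actual structs.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImage mirrors the fields the report prints for each image; the JSON tags
// follow crictl's output, the struct itself is an assumption for this sketch.
type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
}

func main() {
	// On the minikube node this is executed over ssh (see the ssh_runner lines
	// above); here we simply run crictl on the local machine.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list struct {
		Images []criImage `json:"images"`
	}
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Printf("%s  %v  %s\n", img.ID, img.RepoTags, img.Size)
	}
}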

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-276405 ssh pgrep buildkitd: exit status 1 (336.033737ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 image build -t localhost/my-image:functional-276405 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-276405 image build -t localhost/my-image:functional-276405 testdata/build --alsologtostderr: (3.470684694s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-276405 image build -t localhost/my-image:functional-276405 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 20c64711f58
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-276405
--> 0ca756f2d4c
Successfully tagged localhost/my-image:functional-276405
0ca756f2d4ceef7badc7322777ae14dd3975a65bcd621d54e9ee71d8727ab6a7
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-276405 image build -t localhost/my-image:functional-276405 testdata/build --alsologtostderr:
I1014 20:19:24.728016  576054 out.go:360] Setting OutFile to fd 1 ...
I1014 20:19:24.729023  576054 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 20:19:24.729064  576054 out.go:374] Setting ErrFile to fd 2...
I1014 20:19:24.729084  576054 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 20:19:24.729364  576054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
I1014 20:19:24.730018  576054 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 20:19:24.730711  576054 config.go:182] Loaded profile config "functional-276405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 20:19:24.731180  576054 cli_runner.go:164] Run: docker container inspect functional-276405 --format={{.State.Status}}
I1014 20:19:24.757943  576054 ssh_runner.go:195] Run: systemctl --version
I1014 20:19:24.757993  576054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-276405
I1014 20:19:24.779694  576054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33517 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/functional-276405/id_rsa Username:docker}
I1014 20:19:24.887851  576054 build_images.go:161] Building image from path: /tmp/build.535667749.tar
I1014 20:19:24.887920  576054 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1014 20:19:24.896453  576054 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.535667749.tar
I1014 20:19:24.901750  576054 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.535667749.tar: stat -c "%s %y" /var/lib/minikube/build/build.535667749.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.535667749.tar': No such file or directory
I1014 20:19:24.901783  576054 ssh_runner.go:362] scp /tmp/build.535667749.tar --> /var/lib/minikube/build/build.535667749.tar (3072 bytes)
I1014 20:19:24.922909  576054 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.535667749
I1014 20:19:24.931456  576054 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.535667749 -xf /var/lib/minikube/build/build.535667749.tar
I1014 20:19:24.944845  576054 crio.go:315] Building image: /var/lib/minikube/build/build.535667749
I1014 20:19:24.944919  576054 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-276405 /var/lib/minikube/build/build.535667749 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1014 20:19:28.117257  576054 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-276405 /var/lib/minikube/build/build.535667749 --cgroup-manager=cgroupfs: (3.172315163s)
I1014 20:19:28.117323  576054 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.535667749
I1014 20:19:28.125244  576054 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.535667749.tar
I1014 20:19:28.133468  576054 build_images.go:217] Built localhost/my-image:functional-276405 from /tmp/build.535667749.tar
I1014 20:19:28.133515  576054 build_images.go:133] succeeded building to: functional-276405
I1014 20:19:28.133521  576054 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.04s)
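
The build stderr above lays out how an image build reaches a CRI-O node: the local build context is tarred, copied over ssh into /var/lib/minikube/build, unpacked, built with sudo podman build --cgroup-manager=cgroupfs, and the scratch directory and tarball are removed afterwards. The following is a rough stand-alone sketch of the node-side steps only; the build.example names are placeholders, since the real run generates a random tarball name like the build.535667749.tar seen in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes one command and fails loudly, loosely mirroring the ssh_runner
// calls in the log above.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(fmt.Sprintf("%s %v: %v", name, args, err))
	}
}

func main() {
	tarball := "/var/lib/minikube/build/build.example.tar" // placeholder name
	buildDir := "/var/lib/minikube/build/build.example"    // placeholder name

	// Unpack the copied build context...
	run("sudo", "mkdir", "-p", buildDir)
	run("sudo", "tar", "-C", buildDir, "-xf", tarball)
	// ...build it with podman, using cgroupfs as in the log above...
	run("sudo", "podman", "build",
		"-t", "localhost/my-image:functional-276405",
		buildDir, "--cgroup-manager=cgroupfs")
	// ...and clean up the scratch directory and tarball.
	run("sudo", "rm", "-rf", buildDir)
	run("sudo", "rm", "-f", tarball)
}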

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-276405
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.73s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 image rm kicbase/echo-server:functional-276405 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-276405 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-276405
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-276405
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-276405
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (210.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1014 20:21:44.392361  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-378189 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m29.542977534s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (210.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-378189 kubectl -- rollout status deployment/busybox: (3.460533801s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- exec busybox-7b57f96db7-n6bnr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- exec busybox-7b57f96db7-q27pj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- exec busybox-7b57f96db7-tkxmq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- exec busybox-7b57f96db7-n6bnr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- exec busybox-7b57f96db7-q27pj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- exec busybox-7b57f96db7-tkxmq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- exec busybox-7b57f96db7-n6bnr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- exec busybox-7b57f96db7-q27pj -- nslookup kubernetes.default.svc.cluster.local
E1014 20:23:07.467790  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- exec busybox-7b57f96db7-tkxmq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.20s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- exec busybox-7b57f96db7-n6bnr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- exec busybox-7b57f96db7-n6bnr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- exec busybox-7b57f96db7-q27pj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- exec busybox-7b57f96db7-q27pj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- exec busybox-7b57f96db7-tkxmq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 kubectl -- exec busybox-7b57f96db7-tkxmq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.54s)
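
The sh -c pipelines above recover the address that host.minikube.internal resolves to inside each pod: busybox nslookup prints the answer on its fifth line, awk 'NR==5' selects that line, and cut -d' ' -f3 takes the third space-separated field, the IP itself, which the follow-up command then pings (192.168.49.1, the docker network gateway). A small Go sketch of the same extraction, run over illustrative busybox-style nslookup output:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Illustrative nslookup output; the real test captures this from
	// kubectl exec ... nslookup host.minikube.internal inside the pod.
	sample := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal`

	lines := strings.Split(sample, "\n")
	fields := strings.Split(lines[4], " ") // awk 'NR==5' -> fifth line
	fmt.Println(fields[2])                 // cut -d' ' -f3 -> 192.168.49.1
}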

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (57.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 node add --alsologtostderr -v 5
E1014 20:23:41.304576  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:41.311071  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:41.322884  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:41.344412  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:41.385936  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:41.467325  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:41.628732  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:41.950065  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:42.592243  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:43.873690  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:46.435101  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:51.556855  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:01.799207  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-378189 node add --alsologtostderr -v 5: (56.775190406s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-378189 status --alsologtostderr -v 5: (1.080583492s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-378189 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.088250221s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-378189 status --output json --alsologtostderr -v 5: (1.092017925s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp testdata/cp-test.txt ha-378189:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp ha-378189:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3053045460/001/cp-test_ha-378189.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp ha-378189:/home/docker/cp-test.txt ha-378189-m02:/home/docker/cp-test_ha-378189_ha-378189-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m02 "sudo cat /home/docker/cp-test_ha-378189_ha-378189-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp ha-378189:/home/docker/cp-test.txt ha-378189-m03:/home/docker/cp-test_ha-378189_ha-378189-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m03 "sudo cat /home/docker/cp-test_ha-378189_ha-378189-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp ha-378189:/home/docker/cp-test.txt ha-378189-m04:/home/docker/cp-test_ha-378189_ha-378189-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m04 "sudo cat /home/docker/cp-test_ha-378189_ha-378189-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp testdata/cp-test.txt ha-378189-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp ha-378189-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3053045460/001/cp-test_ha-378189-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp ha-378189-m02:/home/docker/cp-test.txt ha-378189:/home/docker/cp-test_ha-378189-m02_ha-378189.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189 "sudo cat /home/docker/cp-test_ha-378189-m02_ha-378189.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp ha-378189-m02:/home/docker/cp-test.txt ha-378189-m03:/home/docker/cp-test_ha-378189-m02_ha-378189-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m03 "sudo cat /home/docker/cp-test_ha-378189-m02_ha-378189-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp ha-378189-m02:/home/docker/cp-test.txt ha-378189-m04:/home/docker/cp-test_ha-378189-m02_ha-378189-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m04 "sudo cat /home/docker/cp-test_ha-378189-m02_ha-378189-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp testdata/cp-test.txt ha-378189-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp ha-378189-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3053045460/001/cp-test_ha-378189-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp ha-378189-m03:/home/docker/cp-test.txt ha-378189:/home/docker/cp-test_ha-378189-m03_ha-378189.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189 "sudo cat /home/docker/cp-test_ha-378189-m03_ha-378189.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp ha-378189-m03:/home/docker/cp-test.txt ha-378189-m02:/home/docker/cp-test_ha-378189-m03_ha-378189-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m02 "sudo cat /home/docker/cp-test_ha-378189-m03_ha-378189-m02.txt"
E1014 20:24:22.280661  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp ha-378189-m03:/home/docker/cp-test.txt ha-378189-m04:/home/docker/cp-test_ha-378189-m03_ha-378189-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m04 "sudo cat /home/docker/cp-test_ha-378189-m03_ha-378189-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp testdata/cp-test.txt ha-378189-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp ha-378189-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3053045460/001/cp-test_ha-378189-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp ha-378189-m04:/home/docker/cp-test.txt ha-378189:/home/docker/cp-test_ha-378189-m04_ha-378189.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189 "sudo cat /home/docker/cp-test_ha-378189-m04_ha-378189.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp ha-378189-m04:/home/docker/cp-test.txt ha-378189-m02:/home/docker/cp-test_ha-378189-m04_ha-378189-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m02 "sudo cat /home/docker/cp-test_ha-378189-m04_ha-378189-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 cp ha-378189-m04:/home/docker/cp-test.txt ha-378189-m03:/home/docker/cp-test_ha-378189-m04_ha-378189-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 ssh -n ha-378189-m03 "sudo cat /home/docker/cp-test_ha-378189-m04_ha-378189-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.85s)
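
The long run of cp/ssh pairs above repeats one pattern for every node and node pair: copy cp-test.txt to a destination, then ssh -n <node> "sudo cat ..." to confirm it arrived intact. Below is a condensed sketch of the host-to-node half of that matrix; the profile and node names come from the log, but the loop itself is illustrative rather than the test's own code.

package main

import (
	"fmt"
	"os/exec"
)

// minikube runs the CLI used throughout this report, echoes its output, and
// panics on failure.
func minikube(args ...string) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("minikube %v: %v\n%s", args, err, out))
	}
	fmt.Print(string(out))
}

func main() {
	nodes := []string{"ha-378189", "ha-378189-m02", "ha-378189-m03", "ha-378189-m04"}
	for _, n := range nodes {
		// host -> node copy, then read the file back over ssh to verify it.
		minikube("-p", "ha-378189", "cp", "testdata/cp-test.txt", n+":/home/docker/cp-test.txt")
		minikube("-p", "ha-378189", "ssh", "-n", n, "sudo cat /home/docker/cp-test.txt")
	}
}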

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-378189 node stop m02 --alsologtostderr -v 5: (11.950494006s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-378189 status --alsologtostderr -v 5: exit status 7 (842.801707ms)

                                                
                                                
-- stdout --
	ha-378189
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-378189-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-378189-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-378189-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:24:40.266967  590915 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:24:40.267187  590915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:24:40.267219  590915 out.go:374] Setting ErrFile to fd 2...
	I1014 20:24:40.267239  590915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:24:40.267529  590915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:24:40.267742  590915 out.go:368] Setting JSON to false
	I1014 20:24:40.267830  590915 mustload.go:65] Loading cluster: ha-378189
	I1014 20:24:40.267888  590915 notify.go:220] Checking for updates...
	I1014 20:24:40.268276  590915 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:24:40.268406  590915 status.go:174] checking status of ha-378189 ...
	I1014 20:24:40.269341  590915 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:24:40.291706  590915 status.go:371] ha-378189 host status = "Running" (err=<nil>)
	I1014 20:24:40.291729  590915 host.go:66] Checking if "ha-378189" exists ...
	I1014 20:24:40.292069  590915 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189
	I1014 20:24:40.311271  590915 host.go:66] Checking if "ha-378189" exists ...
	I1014 20:24:40.311590  590915 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:24:40.311629  590915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189
	I1014 20:24:40.338960  590915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189/id_rsa Username:docker}
	I1014 20:24:40.450239  590915 ssh_runner.go:195] Run: systemctl --version
	I1014 20:24:40.457225  590915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:24:40.470718  590915 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:24:40.543326  590915 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-14 20:24:40.531672303 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 20:24:40.543982  590915 kubeconfig.go:125] found "ha-378189" server: "https://192.168.49.254:8443"
	I1014 20:24:40.544028  590915 api_server.go:166] Checking apiserver status ...
	I1014 20:24:40.544078  590915 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:24:40.559103  590915 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1262/cgroup
	I1014 20:24:40.568416  590915 api_server.go:182] apiserver freezer: "3:freezer:/docker/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/crio/crio-bad26b86162cba9b75fc1d84a330d0e2e8d898d37f4bd30e17285c063fef9ac2"
	I1014 20:24:40.568531  590915 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bbe33f2b58f8d2a39e6562d69c2d205b55fb014dd0388b7a0786eb126f6e45e2/crio/crio-bad26b86162cba9b75fc1d84a330d0e2e8d898d37f4bd30e17285c063fef9ac2/freezer.state
	I1014 20:24:40.576716  590915 api_server.go:204] freezer state: "THAWED"
	I1014 20:24:40.576744  590915 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1014 20:24:40.585397  590915 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1014 20:24:40.585431  590915 status.go:463] ha-378189 apiserver status = Running (err=<nil>)
	I1014 20:24:40.585455  590915 status.go:176] ha-378189 status: &{Name:ha-378189 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 20:24:40.585487  590915 status.go:174] checking status of ha-378189-m02 ...
	I1014 20:24:40.585820  590915 cli_runner.go:164] Run: docker container inspect ha-378189-m02 --format={{.State.Status}}
	I1014 20:24:40.605058  590915 status.go:371] ha-378189-m02 host status = "Stopped" (err=<nil>)
	I1014 20:24:40.605080  590915 status.go:384] host is not running, skipping remaining checks
	I1014 20:24:40.605086  590915 status.go:176] ha-378189-m02 status: &{Name:ha-378189-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 20:24:40.605106  590915 status.go:174] checking status of ha-378189-m03 ...
	I1014 20:24:40.605437  590915 cli_runner.go:164] Run: docker container inspect ha-378189-m03 --format={{.State.Status}}
	I1014 20:24:40.632472  590915 status.go:371] ha-378189-m03 host status = "Running" (err=<nil>)
	I1014 20:24:40.632496  590915 host.go:66] Checking if "ha-378189-m03" exists ...
	I1014 20:24:40.632819  590915 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m03
	I1014 20:24:40.670431  590915 host.go:66] Checking if "ha-378189-m03" exists ...
	I1014 20:24:40.670737  590915 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:24:40.670787  590915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m03
	I1014 20:24:40.699845  590915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33532 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m03/id_rsa Username:docker}
	I1014 20:24:40.806914  590915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:24:40.820679  590915 kubeconfig.go:125] found "ha-378189" server: "https://192.168.49.254:8443"
	I1014 20:24:40.820705  590915 api_server.go:166] Checking apiserver status ...
	I1014 20:24:40.820750  590915 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:24:40.831285  590915 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup
	I1014 20:24:40.839465  590915 api_server.go:182] apiserver freezer: "3:freezer:/docker/61bf643cd6f25097f23a46db6b05766e3c80d212892bc3de1a25aae6627f6a53/crio/crio-8d74398bfb7201ad15db9b5fb4d9340791b22cc0fc615a7476fcbd50f6f2de0e"
	I1014 20:24:40.839549  590915 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/61bf643cd6f25097f23a46db6b05766e3c80d212892bc3de1a25aae6627f6a53/crio/crio-8d74398bfb7201ad15db9b5fb4d9340791b22cc0fc615a7476fcbd50f6f2de0e/freezer.state
	I1014 20:24:40.846894  590915 api_server.go:204] freezer state: "THAWED"
	I1014 20:24:40.846922  590915 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1014 20:24:40.855695  590915 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1014 20:24:40.855773  590915 status.go:463] ha-378189-m03 apiserver status = Running (err=<nil>)
	I1014 20:24:40.855791  590915 status.go:176] ha-378189-m03 status: &{Name:ha-378189-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 20:24:40.855810  590915 status.go:174] checking status of ha-378189-m04 ...
	I1014 20:24:40.856159  590915 cli_runner.go:164] Run: docker container inspect ha-378189-m04 --format={{.State.Status}}
	I1014 20:24:40.875757  590915 status.go:371] ha-378189-m04 host status = "Running" (err=<nil>)
	I1014 20:24:40.875783  590915 host.go:66] Checking if "ha-378189-m04" exists ...
	I1014 20:24:40.876070  590915 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-378189-m04
	I1014 20:24:40.904634  590915 host.go:66] Checking if "ha-378189-m04" exists ...
	I1014 20:24:40.904948  590915 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:24:40.904985  590915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-378189-m04
	I1014 20:24:40.924520  590915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/ha-378189-m04/id_rsa Username:docker}
	I1014 20:24:41.034000  590915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:24:41.050337  590915 status.go:176] ha-378189-m04 status: &{Name:ha-378189-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.79s)
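
The status stderr above also shows how the apiserver state of each control-plane node is decided: pgrep for a kube-apiserver process, read its freezer cgroup to confirm it is THAWED rather than frozen, then GET /healthz on the load-balancer endpoint (https://192.168.49.254:8443) and require a 200. A simplified sketch of that probe follows; the shortened cgroup path and the skip-verify HTTP client are stand-ins for this sketch, and the real path embeds the container ids shown in the log.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// 1. Is a kube-apiserver process running at all?
	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
		fmt.Println("apiserver: Stopped")
		return
	}
	// 2. Is its freezer cgroup THAWED? A frozen cgroup would mean the node is
	// paused. The concrete path is a stand-in; in the log it lives under
	// /sys/fs/cgroup/freezer/docker/<container>/crio/<apiserver-container>/freezer.state.
	state, _ := exec.Command("sudo", "cat", "/sys/fs/cgroup/freezer/freezer.state").Output()
	if !strings.Contains(string(state), "THAWED") {
		fmt.Println("apiserver: Paused")
		return
	}
	// 3. Does /healthz on the HA load-balancer endpoint answer 200?
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver: Error")
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		fmt.Println("apiserver: Error")
		return
	}
	fmt.Println("apiserver: Running")
}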

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (33.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 node start m02 --alsologtostderr -v 5
E1014 20:25:03.242882  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-378189 node start m02 --alsologtostderr -v 5: (31.835278795s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-378189 status --alsologtostderr -v 5: (1.194599073s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.16s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.244610485s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.24s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (23.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-378189 stop --alsologtostderr -v 5: (23.81797759s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-378189 status --alsologtostderr -v 5: exit status 7 (110.768075ms)

                                                
                                                
-- stdout --
	ha-378189
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-378189-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-378189-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:34:44.092568  602038 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:34:44.092699  602038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:34:44.092709  602038 out.go:374] Setting ErrFile to fd 2...
	I1014 20:34:44.092714  602038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:34:44.092985  602038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:34:44.093174  602038 out.go:368] Setting JSON to false
	I1014 20:34:44.093221  602038 mustload.go:65] Loading cluster: ha-378189
	I1014 20:34:44.093316  602038 notify.go:220] Checking for updates...
	I1014 20:34:44.093623  602038 config.go:182] Loaded profile config "ha-378189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:34:44.093642  602038 status.go:174] checking status of ha-378189 ...
	I1014 20:34:44.094528  602038 cli_runner.go:164] Run: docker container inspect ha-378189 --format={{.State.Status}}
	I1014 20:34:44.113745  602038 status.go:371] ha-378189 host status = "Stopped" (err=<nil>)
	I1014 20:34:44.113770  602038 status.go:384] host is not running, skipping remaining checks
	I1014 20:34:44.113777  602038 status.go:176] ha-378189 status: &{Name:ha-378189 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 20:34:44.113807  602038 status.go:174] checking status of ha-378189-m02 ...
	I1014 20:34:44.114105  602038 cli_runner.go:164] Run: docker container inspect ha-378189-m02 --format={{.State.Status}}
	I1014 20:34:44.134697  602038 status.go:371] ha-378189-m02 host status = "Stopped" (err=<nil>)
	I1014 20:34:44.134724  602038 status.go:384] host is not running, skipping remaining checks
	I1014 20:34:44.134737  602038 status.go:176] ha-378189-m02 status: &{Name:ha-378189-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 20:34:44.134756  602038 status.go:174] checking status of ha-378189-m04 ...
	I1014 20:34:44.135042  602038 cli_runner.go:164] Run: docker container inspect ha-378189-m04 --format={{.State.Status}}
	I1014 20:34:44.154151  602038 status.go:371] ha-378189-m04 host status = "Stopped" (err=<nil>)
	I1014 20:34:44.154176  602038 status.go:384] host is not running, skipping remaining checks
	I1014 20:34:44.154183  602038 status.go:176] ha-378189-m04 status: &{Name:ha-378189-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (23.93s)
TestMultiControlPlane/serial/RestartCluster (78.2s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-378189 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m17.232705095s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (78.20s)
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)
TestMultiControlPlane/serial/AddSecondaryNode (80.82s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 node add --control-plane --alsologtostderr -v 5
E1014 20:36:44.392521  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-378189 node add --control-plane --alsologtostderr -v 5: (1m19.747572336s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-378189 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-378189 status --alsologtostderr -v 5: (1.074630082s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.82s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.111046186s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)
TestJSONOutput/start/Command (81.2s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-897764 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1014 20:38:41.309692  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-897764 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m21.200174987s)
--- PASS: TestJSONOutput/start/Command (81.20s)
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (5.69s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-897764 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-897764 --output=json --user=testUser: (5.69295458s)
--- PASS: TestJSONOutput/stop/Command (5.69s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-459137 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-459137 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (95.69142ms)
-- stdout --
	{"specversion":"1.0","id":"7c97cc9a-38b3-4a07-84c2-c69fec755484","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-459137] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"238d3c2d-2327-4e3a-babe-06f507bffcd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21409"}}
	{"specversion":"1.0","id":"a61c58f8-b610-4a73-9344-9d0625b6deac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"37332d5d-3460-4e85-bf76-fd4a09d8ad26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig"}}
	{"specversion":"1.0","id":"a17f930b-45e5-474b-83c0-2e687b583579","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube"}}
	{"specversion":"1.0","id":"51f62d97-d4f6-4e87-8e41-410f0cc51881","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f19bbfa3-3a11-4ee7-9b38-308808405766","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fd93b2e0-e85f-4ca1-b911-e1ca5498355f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-459137" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-459137
--- PASS: TestErrorJSONOutput (0.24s)
TestKicCustomNetwork/create_custom_network (69.81s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-277036 --network=
E1014 20:39:47.469117  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:40:04.368465  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-277036 --network=: (1m7.649778145s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-277036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-277036
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-277036: (2.135628719s)
--- PASS: TestKicCustomNetwork/create_custom_network (69.81s)
TestKicCustomNetwork/use_default_bridge_network (32.81s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-303038 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-303038 --network=bridge: (30.706309466s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-303038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-303038
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-303038: (2.07880653s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.81s)
TestKicExistingNetwork (38s)
=== RUN   TestKicExistingNetwork
I1014 20:40:55.623309  546910 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1014 20:40:55.639261  546910 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1014 20:40:55.639339  546910 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1014 20:40:55.639356  546910 cli_runner.go:164] Run: docker network inspect existing-network
W1014 20:40:55.655560  546910 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1014 20:40:55.655594  546910 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1014 20:40:55.655608  546910 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1014 20:40:55.655711  546910 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1014 20:40:55.672115  546910 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3483fd8ccce IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:16:f0:2b:be:5b} reservation:<nil>}
I1014 20:40:55.672474  546910 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2c3c0}
I1014 20:40:55.672495  546910 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1014 20:40:55.672545  546910 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1014 20:40:55.737126  546910 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-150884 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-150884 --network=existing-network: (35.819319915s)
helpers_test.go:175: Cleaning up "existing-network-150884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-150884
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-150884: (2.037634691s)
I1014 20:41:33.610585  546910 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (38.00s)
TestKicCustomSubnet (39.69s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-830823 --subnet=192.168.60.0/24
E1014 20:41:44.392446  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-830823 --subnet=192.168.60.0/24: (37.603363382s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-830823 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-830823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-830823
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-830823: (2.06264104s)
--- PASS: TestKicCustomSubnet (39.69s)
TestKicStaticIP (33.31s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-633213 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-633213 --static-ip=192.168.200.200: (31.026149023s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-633213 ip
helpers_test.go:175: Cleaning up "static-ip-633213" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-633213
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-633213: (2.109268144s)
--- PASS: TestKicStaticIP (33.31s)
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)
TestMinikubeProfile (80.4s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-341393 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-341393 --driver=docker  --container-runtime=crio: (37.328507593s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-343871 --driver=docker  --container-runtime=crio
E1014 20:43:41.306477  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-343871 --driver=docker  --container-runtime=crio: (37.6084494s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-341393
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-343871
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-343871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-343871
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-343871: (1.987502939s)
helpers_test.go:175: Cleaning up "first-341393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-341393
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-341393: (1.982406729s)
--- PASS: TestMinikubeProfile (80.40s)
TestMountStart/serial/StartWithMountFirst (7.3s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-127970 --memory=3072 --mount-string /tmp/TestMountStartserial430941207/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-127970 --memory=3072 --mount-string /tmp/TestMountStartserial430941207/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.298145444s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.30s)
TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-127970 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
TestMountStart/serial/StartWithMountSecond (9.69s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-130030 --memory=3072 --mount-string /tmp/TestMountStartserial430941207/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-130030 --memory=3072 --mount-string /tmp/TestMountStartserial430941207/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.689359488s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.69s)
TestMountStart/serial/VerifyMountSecond (0.29s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-130030 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)
TestMountStart/serial/DeleteFirst (1.62s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-127970 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-127970 --alsologtostderr -v=5: (1.615136315s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)
TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-130030 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)
TestMountStart/serial/Stop (1.2s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-130030
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-130030: (1.204671894s)
--- PASS: TestMountStart/serial/Stop (1.20s)
TestMountStart/serial/RestartStopped (8.04s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-130030
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-130030: (7.039948064s)
--- PASS: TestMountStart/serial/RestartStopped (8.04s)
TestMountStart/serial/VerifyMountPostStop (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-130030 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)
TestMultiNode/serial/FreshStart2Nodes (134.55s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-632158 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1014 20:46:44.392491  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-632158 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m14.023838744s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (134.55s)
TestMultiNode/serial/DeployApp2Nodes (4.77s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-632158 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-632158 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-632158 -- rollout status deployment/busybox: (2.982558543s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-632158 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-632158 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-632158 -- exec busybox-7b57f96db7-hdt9c -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-632158 -- exec busybox-7b57f96db7-jjr7h -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-632158 -- exec busybox-7b57f96db7-hdt9c -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-632158 -- exec busybox-7b57f96db7-jjr7h -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-632158 -- exec busybox-7b57f96db7-hdt9c -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-632158 -- exec busybox-7b57f96db7-jjr7h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.77s)
TestMultiNode/serial/PingHostFrom2Pods (0.9s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-632158 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-632158 -- exec busybox-7b57f96db7-hdt9c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-632158 -- exec busybox-7b57f96db7-hdt9c -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-632158 -- exec busybox-7b57f96db7-jjr7h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-632158 -- exec busybox-7b57f96db7-jjr7h -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)
TestMultiNode/serial/AddNode (58.37s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-632158 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-632158 -v=5 --alsologtostderr: (57.678752028s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.37s)
TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-632158 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)
TestMultiNode/serial/ProfileList (0.72s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)
TestMultiNode/serial/CopyFile (10.64s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 cp testdata/cp-test.txt multinode-632158:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 ssh -n multinode-632158 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 cp multinode-632158:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3457766722/001/cp-test_multinode-632158.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 ssh -n multinode-632158 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 cp multinode-632158:/home/docker/cp-test.txt multinode-632158-m02:/home/docker/cp-test_multinode-632158_multinode-632158-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 ssh -n multinode-632158 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 ssh -n multinode-632158-m02 "sudo cat /home/docker/cp-test_multinode-632158_multinode-632158-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 cp multinode-632158:/home/docker/cp-test.txt multinode-632158-m03:/home/docker/cp-test_multinode-632158_multinode-632158-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 ssh -n multinode-632158 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 ssh -n multinode-632158-m03 "sudo cat /home/docker/cp-test_multinode-632158_multinode-632158-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 cp testdata/cp-test.txt multinode-632158-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 ssh -n multinode-632158-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 cp multinode-632158-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3457766722/001/cp-test_multinode-632158-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 ssh -n multinode-632158-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 cp multinode-632158-m02:/home/docker/cp-test.txt multinode-632158:/home/docker/cp-test_multinode-632158-m02_multinode-632158.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 ssh -n multinode-632158-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 ssh -n multinode-632158 "sudo cat /home/docker/cp-test_multinode-632158-m02_multinode-632158.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 cp multinode-632158-m02:/home/docker/cp-test.txt multinode-632158-m03:/home/docker/cp-test_multinode-632158-m02_multinode-632158-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 ssh -n multinode-632158-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 ssh -n multinode-632158-m03 "sudo cat /home/docker/cp-test_multinode-632158-m02_multinode-632158-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 cp testdata/cp-test.txt multinode-632158-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 ssh -n multinode-632158-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 cp multinode-632158-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3457766722/001/cp-test_multinode-632158-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 ssh -n multinode-632158-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 cp multinode-632158-m03:/home/docker/cp-test.txt multinode-632158:/home/docker/cp-test_multinode-632158-m03_multinode-632158.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 ssh -n multinode-632158-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 ssh -n multinode-632158 "sudo cat /home/docker/cp-test_multinode-632158-m03_multinode-632158.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 cp multinode-632158-m03:/home/docker/cp-test.txt multinode-632158-m02:/home/docker/cp-test_multinode-632158-m03_multinode-632158-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 ssh -n multinode-632158-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 ssh -n multinode-632158-m02 "sudo cat /home/docker/cp-test_multinode-632158-m03_multinode-632158-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.64s)
TestMultiNode/serial/StopNode (2.34s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-632158 node stop m03: (1.233571231s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-632158 status: exit status 7 (557.678751ms)
-- stdout --
	multinode-632158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-632158-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-632158-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-632158 status --alsologtostderr: exit status 7 (548.48484ms)
-- stdout --
	multinode-632158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-632158-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-632158-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1014 20:48:09.691405  652796 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:48:09.691593  652796 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:48:09.691605  652796 out.go:374] Setting ErrFile to fd 2...
	I1014 20:48:09.691610  652796 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:48:09.691857  652796 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:48:09.692037  652796 out.go:368] Setting JSON to false
	I1014 20:48:09.692071  652796 mustload.go:65] Loading cluster: multinode-632158
	I1014 20:48:09.692181  652796 notify.go:220] Checking for updates...
	I1014 20:48:09.692485  652796 config.go:182] Loaded profile config "multinode-632158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:48:09.692509  652796 status.go:174] checking status of multinode-632158 ...
	I1014 20:48:09.693100  652796 cli_runner.go:164] Run: docker container inspect multinode-632158 --format={{.State.Status}}
	I1014 20:48:09.714126  652796 status.go:371] multinode-632158 host status = "Running" (err=<nil>)
	I1014 20:48:09.714151  652796 host.go:66] Checking if "multinode-632158" exists ...
	I1014 20:48:09.714449  652796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-632158
	I1014 20:48:09.744971  652796 host.go:66] Checking if "multinode-632158" exists ...
	I1014 20:48:09.745262  652796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:48:09.745307  652796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-632158
	I1014 20:48:09.762784  652796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33637 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/multinode-632158/id_rsa Username:docker}
	I1014 20:48:09.861956  652796 ssh_runner.go:195] Run: systemctl --version
	I1014 20:48:09.868340  652796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:48:09.881182  652796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:48:09.943177  652796 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-14 20:48:09.932855785 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 20:48:09.943894  652796 kubeconfig.go:125] found "multinode-632158" server: "https://192.168.67.2:8443"
	I1014 20:48:09.943942  652796 api_server.go:166] Checking apiserver status ...
	I1014 20:48:09.944007  652796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:48:09.955674  652796 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1223/cgroup
	I1014 20:48:09.964697  652796 api_server.go:182] apiserver freezer: "3:freezer:/docker/6d883011d713786dc496624d1f1628062dc531ba46f0938c4d2dadde27916e1a/crio/crio-75bce73e78b80b1b1202e3f6f941f62aacf3f3ca5500eaa9fcb0b30c63fa35b6"
	I1014 20:48:09.964773  652796 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6d883011d713786dc496624d1f1628062dc531ba46f0938c4d2dadde27916e1a/crio/crio-75bce73e78b80b1b1202e3f6f941f62aacf3f3ca5500eaa9fcb0b30c63fa35b6/freezer.state
	I1014 20:48:09.972097  652796 api_server.go:204] freezer state: "THAWED"
	I1014 20:48:09.972124  652796 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1014 20:48:09.981979  652796 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1014 20:48:09.982005  652796 status.go:463] multinode-632158 apiserver status = Running (err=<nil>)
	I1014 20:48:09.982015  652796 status.go:176] multinode-632158 status: &{Name:multinode-632158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 20:48:09.982031  652796 status.go:174] checking status of multinode-632158-m02 ...
	I1014 20:48:09.982454  652796 cli_runner.go:164] Run: docker container inspect multinode-632158-m02 --format={{.State.Status}}
	I1014 20:48:10.005066  652796 status.go:371] multinode-632158-m02 host status = "Running" (err=<nil>)
	I1014 20:48:10.005093  652796 host.go:66] Checking if "multinode-632158-m02" exists ...
	I1014 20:48:10.005425  652796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-632158-m02
	I1014 20:48:10.025722  652796 host.go:66] Checking if "multinode-632158-m02" exists ...
	I1014 20:48:10.026035  652796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:48:10.026094  652796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-632158-m02
	I1014 20:48:10.051220  652796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33642 SSHKeyPath:/home/jenkins/minikube-integration/21409-545047/.minikube/machines/multinode-632158-m02/id_rsa Username:docker}
	I1014 20:48:10.153534  652796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:48:10.166394  652796 status.go:176] multinode-632158-m02 status: &{Name:multinode-632158-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1014 20:48:10.166500  652796 status.go:174] checking status of multinode-632158-m03 ...
	I1014 20:48:10.166897  652796 cli_runner.go:164] Run: docker container inspect multinode-632158-m03 --format={{.State.Status}}
	I1014 20:48:10.184539  652796 status.go:371] multinode-632158-m03 host status = "Stopped" (err=<nil>)
	I1014 20:48:10.184565  652796 status.go:384] host is not running, skipping remaining checks
	I1014 20:48:10.184572  652796 status.go:176] multinode-632158-m03 status: &{Name:multinode-632158-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)
TestMultiNode/serial/StartAfterStop (8.25s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-632158 node start m03 -v=5 --alsologtostderr: (7.446980808s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.25s)
TestMultiNode/serial/RestartKeepsNodes (76.15s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-632158
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-632158
E1014 20:48:41.309511  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-632158: (24.71939511s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-632158 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-632158 --wait=true -v=5 --alsologtostderr: (51.29775211s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-632158
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.15s)
TestMultiNode/serial/DeleteNode (5.56s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-632158 node delete m03: (4.870974594s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.56s)
TestMultiNode/serial/StopMultiNode (23.82s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-632158 stop: (23.625566126s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-632158 status: exit status 7 (92.120243ms)

                                                
                                                
-- stdout --
	multinode-632158
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-632158-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-632158 status --alsologtostderr: exit status 7 (103.004306ms)

                                                
                                                
-- stdout --
	multinode-632158
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-632158-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:50:03.918353  660538 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:50:03.918547  660538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:50:03.918560  660538 out.go:374] Setting ErrFile to fd 2...
	I1014 20:50:03.918565  660538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:50:03.918829  660538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 20:50:03.919014  660538 out.go:368] Setting JSON to false
	I1014 20:50:03.919047  660538 mustload.go:65] Loading cluster: multinode-632158
	I1014 20:50:03.919150  660538 notify.go:220] Checking for updates...
	I1014 20:50:03.919450  660538 config.go:182] Loaded profile config "multinode-632158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:50:03.919467  660538 status.go:174] checking status of multinode-632158 ...
	I1014 20:50:03.920335  660538 cli_runner.go:164] Run: docker container inspect multinode-632158 --format={{.State.Status}}
	I1014 20:50:03.939562  660538 status.go:371] multinode-632158 host status = "Stopped" (err=<nil>)
	I1014 20:50:03.939584  660538 status.go:384] host is not running, skipping remaining checks
	I1014 20:50:03.939591  660538 status.go:176] multinode-632158 status: &{Name:multinode-632158 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 20:50:03.939629  660538 status.go:174] checking status of multinode-632158-m02 ...
	I1014 20:50:03.939936  660538 cli_runner.go:164] Run: docker container inspect multinode-632158-m02 --format={{.State.Status}}
	I1014 20:50:03.973203  660538 status.go:371] multinode-632158-m02 host status = "Stopped" (err=<nil>)
	I1014 20:50:03.973224  660538 status.go:384] host is not running, skipping remaining checks
	I1014 20:50:03.973230  660538 status.go:176] multinode-632158-m02 status: &{Name:multinode-632158-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.82s)
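
As the non-zero exits above show, `minikube status` deliberately returns exit code 7 once the hosts are stopped, so callers have to inspect the exit code instead of treating any error as a failure. A minimal Go sketch of that handling (profile name "demo" is illustrative):

// status_stopped.go: distinguish "stopped" (exit 7) from real errors.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "demo", "status").CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		log.Println("cluster is running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		log.Println("cluster is stopped (exit code 7), which is expected after `minikube stop`")
	default:
		log.Fatalf("unexpected status error: %v", err)
	}
}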

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (57.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-632158 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-632158 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (56.423882592s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-632158 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.15s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (37.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-632158
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-632158-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-632158-m02 --driver=docker  --container-runtime=crio: exit status 14 (89.817703ms)

                                                
                                                
-- stdout --
	* [multinode-632158-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-632158-m02' is duplicated with machine name 'multinode-632158-m02' in profile 'multinode-632158'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-632158-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-632158-m03 --driver=docker  --container-runtime=crio: (35.330670136s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-632158
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-632158: exit status 80 (365.744282ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-632158 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-632158-m03 already exists in multinode-632158-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-632158-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-632158-m03: (1.942650628s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.79s)
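
A minimal Go sketch of the negative case TestMultiNode/serial/ValidateNameConflict checks: starting a profile whose name collides with a machine name inside an existing multinode profile fails fast with exit code 14 (MK_USAGE). Profile names here are illustrative.

// name_conflict.go: expect exit code 14 for a duplicated profile name.
package main

import (
	"errors"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "demo-m02",
		"--driver=docker", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		log.Printf("got the expected usage error:\n%s", out)
		return
	}
	log.Fatalf("expected exit code 14 for a duplicated profile name, got err=%v", err)
}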

                                                
                                    
x
+
TestPreload (162.16s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-901228 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1014 20:51:44.394577  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-901228 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.670359401s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-901228 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-901228 image pull gcr.io/k8s-minikube/busybox: (2.188235913s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-901228
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-901228: (5.765761814s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-901228 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1014 20:53:41.304623  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-901228 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m28.943736778s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-901228 image list
helpers_test.go:175: Cleaning up "test-preload-901228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-901228
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-901228: (2.342626023s)
--- PASS: TestPreload (162.16s)
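
A minimal Go sketch of the preload round trip TestPreload walks through (profile name illustrative, flags taken from the log): create a cluster with preloads disabled, pull an extra image, stop, restart with defaults, and confirm the image survived the restart.

// preload_roundtrip.go: verify an image pulled before a stop is still listed after restart.
package main

import (
	"log"
	"os/exec"
	"strings"
)

// mk runs a minikube command and aborts on any failure.
func mk(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	mk("start", "-p", "preload-demo", "--preload=false", "--driver=docker",
		"--container-runtime=crio", "--kubernetes-version=v1.32.0")
	mk("-p", "preload-demo", "image", "pull", "gcr.io/k8s-minikube/busybox")
	mk("stop", "-p", "preload-demo")
	mk("start", "-p", "preload-demo", "--driver=docker", "--container-runtime=crio")

	if images := mk("-p", "preload-demo", "image", "list"); !strings.Contains(images, "busybox") {
		log.Fatalf("busybox missing after restart:\n%s", images)
	}
	log.Println("busybox image survived the restart")
}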

                                                
                                    
x
+
TestScheduledStopUnix (103.98s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-070115 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-070115 --memory=3072 --driver=docker  --container-runtime=crio: (28.326662693s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-070115 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-070115 -n scheduled-stop-070115
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-070115 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1014 20:54:53.956205  546910 retry.go:31] will retry after 51.28µs: open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/scheduled-stop-070115/pid: no such file or directory
I1014 20:54:53.956464  546910 retry.go:31] will retry after 110.837µs: open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/scheduled-stop-070115/pid: no such file or directory
I1014 20:54:53.957603  546910 retry.go:31] will retry after 227.716µs: open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/scheduled-stop-070115/pid: no such file or directory
I1014 20:54:53.960663  546910 retry.go:31] will retry after 464.222µs: open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/scheduled-stop-070115/pid: no such file or directory
I1014 20:54:53.963324  546910 retry.go:31] will retry after 574µs: open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/scheduled-stop-070115/pid: no such file or directory
I1014 20:54:53.964475  546910 retry.go:31] will retry after 447.748µs: open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/scheduled-stop-070115/pid: no such file or directory
I1014 20:54:53.965714  546910 retry.go:31] will retry after 1.236248ms: open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/scheduled-stop-070115/pid: no such file or directory
I1014 20:54:53.967875  546910 retry.go:31] will retry after 2.266913ms: open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/scheduled-stop-070115/pid: no such file or directory
I1014 20:54:53.971052  546910 retry.go:31] will retry after 2.735062ms: open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/scheduled-stop-070115/pid: no such file or directory
I1014 20:54:53.974252  546910 retry.go:31] will retry after 3.635048ms: open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/scheduled-stop-070115/pid: no such file or directory
I1014 20:54:53.978510  546910 retry.go:31] will retry after 6.437014ms: open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/scheduled-stop-070115/pid: no such file or directory
I1014 20:54:53.985785  546910 retry.go:31] will retry after 5.992102ms: open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/scheduled-stop-070115/pid: no such file or directory
I1014 20:54:53.992015  546910 retry.go:31] will retry after 7.905555ms: open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/scheduled-stop-070115/pid: no such file or directory
I1014 20:54:54.000275  546910 retry.go:31] will retry after 25.793966ms: open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/scheduled-stop-070115/pid: no such file or directory
I1014 20:54:54.026538  546910 retry.go:31] will retry after 18.802771ms: open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/scheduled-stop-070115/pid: no such file or directory
I1014 20:54:54.045780  546910 retry.go:31] will retry after 59.302391ms: open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/scheduled-stop-070115/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-070115 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-070115 -n scheduled-stop-070115
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-070115
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-070115 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-070115
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-070115: exit status 7 (75.428367ms)

                                                
                                                
-- stdout --
	scheduled-stop-070115
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-070115 -n scheduled-stop-070115
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-070115 -n scheduled-stop-070115: exit status 7 (72.144336ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-070115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-070115
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-070115: (4.04040698s)
--- PASS: TestScheduledStopUnix (103.98s)
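
A minimal Go sketch of the pattern TestScheduledStopUnix relies on: schedule a stop a few seconds out, then poll `minikube status` until it returns exit code 7, which signals that the scheduled stop fired. The profile name and timings are illustrative.

// scheduled_stop.go: schedule a stop and wait for the host to reach Stopped.
package main

import (
	"errors"
	"log"
	"os/exec"
	"time"
)

// statusCode returns minikube's status exit code (0 = running, 7 = stopped).
func statusCode(profile string) int {
	err := exec.Command("minikube", "status", "-p", profile).Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode()
	}
	if err != nil {
		return -1
	}
	return 0
}

func main() {
	const profile = "sched-demo"
	if out, err := exec.Command("minikube", "stop", "-p", profile, "--schedule", "15s").CombinedOutput(); err != nil {
		log.Fatalf("scheduling the stop failed: %v\n%s", err, out)
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if statusCode(profile) == 7 {
			log.Println("scheduled stop completed")
			return
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("cluster never reached the Stopped state")
}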

                                                
                                    
x
+
TestInsufficientStorage (13.58s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-377978 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-377978 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.06240356s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b27218e0-19b0-4b20-92a6-940b2f0a3a66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-377978] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d95ca130-b878-4c82-99f2-58f95ccb8e29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21409"}}
	{"specversion":"1.0","id":"3f6a2883-87bf-421b-9719-0a599a046605","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"12d88e62-fff0-4753-9385-f21ec802a98f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig"}}
	{"specversion":"1.0","id":"607c14d0-e88e-463b-a21d-a5c749140a17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube"}}
	{"specversion":"1.0","id":"b214655c-7ad4-4737-a5b0-22f6be744a85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"ff08f60f-84e1-4a1c-a486-64a0344edfc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f9b42b21-bfdb-498c-9e97-73bd935cf305","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f58b3644-32ec-41d2-a692-f2f9c34467a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ce38ebc0-333d-4b45-b8e4-24e9486236a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ef4f01ca-327c-4df3-ba65-488d6e5baa71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d7fbb7b7-f9b6-4074-8bee-49f9eb54b0ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-377978\" primary control-plane node in \"insufficient-storage-377978\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c29d0335-c833-4bc5-b735-5311dfb7ce2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759745255-21703 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f0bb5b6-5cdd-4622-8e70-ce1e23b2001c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7e512408-3807-4be8-bf79-974c55ff2ba5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-377978 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-377978 --output=json --layout=cluster: exit status 7 (297.84621ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-377978","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-377978","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 20:56:20.445871  676695 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-377978" does not appear in /home/jenkins/minikube-integration/21409-545047/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-377978 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-377978 --output=json --layout=cluster: exit status 7 (310.99735ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-377978","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-377978","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 20:56:20.756424  676761 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-377978" does not appear in /home/jenkins/minikube-integration/21409-545047/kubeconfig
	E1014 20:56:20.766228  676761 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/insufficient-storage-377978/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-377978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-377978
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-377978: (1.904684398s)
--- PASS: TestInsufficientStorage (13.58s)
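
With --output=json, minikube emits one CloudEvents record per line, and the failing start above ends in an io.k8s.sigs.minikube.error event carrying exitcode "26" and name "RSRC_DOCKER_STORAGE". A minimal Go sketch that decodes such a stream piped in on stdin (for example, from a saved log); the field names simply mirror the output shown above.

// parse_events.go: surface minikube error events from a JSON event stream.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event is the subset of a minikube CloudEvents record used here.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be very long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("start failed: %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}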

                                                
                                    
x
+
TestRunningBinaryUpgrade (57.09s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1763870620 start -p running-upgrade-274014 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1763870620 start -p running-upgrade-274014 --memory=3072 --vm-driver=docker  --container-runtime=crio: (31.780749893s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-274014 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-274014 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.283256612s)
helpers_test.go:175: Cleaning up "running-upgrade-274014" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-274014
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-274014: (2.127114612s)
--- PASS: TestRunningBinaryUpgrade (57.09s)

                                                
                                    
x
+
TestKubernetesUpgrade (349.92s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-791601 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1014 20:58:41.304232  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-791601 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.970238985s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-791601
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-791601: (1.328123397s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-791601 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-791601 status --format={{.Host}}: exit status 7 (69.354412ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-791601 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-791601 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m38.804103222s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-791601 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-791601 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-791601 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (120.81448ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-791601] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-791601
	    minikube start -p kubernetes-upgrade-791601 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7916012 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-791601 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-791601 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1014 21:03:41.305162  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-791601 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.532476813s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-791601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-791601
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-791601: (1.974321595s)
--- PASS: TestKubernetesUpgrade (349.92s)
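
A minimal Go sketch of the sequence TestKubernetesUpgrade covers (profile and versions illustrative): an in-place upgrade of --kubernetes-version succeeds, while asking the same profile for an older version is refused with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED).

// upgrade_downgrade.go: upgrade a profile, then confirm downgrades are refused.
package main

import (
	"errors"
	"log"
	"os/exec"
)

func start(version string) error {
	return exec.Command("minikube", "start", "-p", "upgrade-demo",
		"--kubernetes-version="+version, "--driver=docker", "--container-runtime=crio").Run()
}

func main() {
	if err := start("v1.28.0"); err != nil {
		log.Fatalf("initial start failed: %v", err)
	}
	if err := start("v1.34.1"); err != nil {
		log.Fatalf("upgrade failed: %v", err)
	}

	// Downgrading an existing cluster must fail cleanly.
	err := start("v1.28.0")
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
		log.Println("downgrade correctly refused (K8S_DOWNGRADE_UNSUPPORTED)")
		return
	}
	log.Fatalf("expected exit code 106, got %v", err)
}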

                                                
                                    
x
+
TestMissingContainerUpgrade (120.3s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2769261086 start -p missing-upgrade-558395 --memory=3072 --driver=docker  --container-runtime=crio
E1014 20:56:44.370556  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:56:44.393571  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2769261086 start -p missing-upgrade-558395 --memory=3072 --driver=docker  --container-runtime=crio: (1m7.827065661s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-558395
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-558395
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-558395 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-558395 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.343621038s)
helpers_test.go:175: Cleaning up "missing-upgrade-558395" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-558395
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-558395: (1.983157007s)
--- PASS: TestMissingContainerUpgrade (120.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-925071 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-925071 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (99.850157ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-925071] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (37.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-925071 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1014 20:56:27.470479  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-925071 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.072351165s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-925071 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.66s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (114.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-925071 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-925071 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m52.319875468s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-925071 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-925071 status -o json: exit status 2 (366.757276ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-925071","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-925071
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-925071: (2.196334567s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (114.88s)
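
The JSON object above is what `minikube status -o json` prints when the host is running but Kubernetes is stopped (the command itself exits 2 in that case, as the log shows). A minimal Go sketch that parses it; the struct fields mirror the keys shown in the log, and the profile name is illustrative.

// status_json.go: decode `minikube status -o json` for a --no-kubernetes profile.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// The exit error is ignored on purpose: status exits non-zero here,
	// but the JSON document is still written to stdout.
	out, _ := exec.Command("minikube", "-p", "nok8s-demo", "status", "-o", "json").Output()

	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("could not parse status output: %v", err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}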

                                                
                                    
x
+
TestNoKubernetes/serial/Start (9.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-925071 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-925071 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.50956008s)
--- PASS: TestNoKubernetes/serial/Start (9.51s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-925071 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-925071 "sudo systemctl is-active --quiet service kubelet": exit status 1 (336.378624ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
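
The check above works because `systemctl is-active --quiet` exits non-zero when the unit is inactive, and `minikube ssh` propagates that exit status (hence "Process exited with status 3"). A minimal Go sketch of the same probe, with an illustrative profile name:

// kubelet_inactive.go: assert the kubelet unit is not running inside the node.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "ssh", "-p", "nok8s-demo",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		log.Printf("kubelet is not running, as expected: %v", err)
		return
	}
	log.Fatal("kubelet is unexpectedly active")
}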

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.45s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-925071
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-925071: (1.414786072s)
--- PASS: TestNoKubernetes/serial/Stop (1.41s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-925071 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-925071 --driver=docker  --container-runtime=crio: (8.244897552s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-925071 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-925071 "sudo systemctl is-active --quiet service kubelet": exit status 1 (448.566129ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.45s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.76s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.76s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (55.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3723674132 start -p stopped-upgrade-755343 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3723674132 start -p stopped-upgrade-755343 --memory=3072 --vm-driver=docker  --container-runtime=crio: (34.795957857s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3723674132 -p stopped-upgrade-755343 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3723674132 -p stopped-upgrade-755343 stop: (1.2396246s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-755343 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-755343 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.381852728s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (55.42s)
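
A minimal Go sketch of the stopped-binary upgrade path above: create the cluster with an older minikube release, stop it with that same release, then start the identical profile with the binary under test. The old-binary path is illustrative; the test downloads the release to a temporary file first.

// stopped_upgrade.go: old binary creates and stops the cluster, new binary restarts it.
package main

import (
	"log"
	"os/exec"
)

// run executes a minikube binary and aborts on failure.
func run(bin string, args ...string) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", bin, args, err, out)
	}
}

func main() {
	oldBin := "/tmp/minikube-v1.32.0"    // illustrative path to the old release
	newBin := "out/minikube-linux-arm64" // binary under test

	run(oldBin, "start", "-p", "upgrade-demo", "--vm-driver=docker", "--container-runtime=crio")
	run(oldBin, "-p", "upgrade-demo", "stop")
	run(newBin, "start", "-p", "upgrade-demo", "--driver=docker", "--container-runtime=crio")
	log.Println("stopped cluster upgraded by the new binary")
}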

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-755343
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-755343: (1.300052422s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.30s)

                                                
                                    
x
+
TestPause/serial/Start (79.01s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-361043 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1014 21:01:44.392690  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-361043 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m19.006529703s)
--- PASS: TestPause/serial/Start (79.01s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (27.59s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-361043 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-361043 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.574960598s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-810782 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-810782 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (188.204673ms)

                                                
                                                
-- stdout --
	* [false-810782] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 21:04:16.388367  714452 out.go:360] Setting OutFile to fd 1 ...
	I1014 21:04:16.388552  714452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:04:16.388578  714452 out.go:374] Setting ErrFile to fd 2...
	I1014 21:04:16.388599  714452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 21:04:16.388897  714452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-545047/.minikube/bin
	I1014 21:04:16.389361  714452 out.go:368] Setting JSON to false
	I1014 21:04:16.390266  714452 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13607,"bootTime":1760462250,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 21:04:16.390389  714452 start.go:141] virtualization:  
	I1014 21:04:16.395807  714452 out.go:179] * [false-810782] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1014 21:04:16.398832  714452 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 21:04:16.398961  714452 notify.go:220] Checking for updates...
	I1014 21:04:16.404647  714452 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 21:04:16.407430  714452 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-545047/kubeconfig
	I1014 21:04:16.410222  714452 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-545047/.minikube
	I1014 21:04:16.412995  714452 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 21:04:16.415971  714452 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 21:04:16.419588  714452 config.go:182] Loaded profile config "force-systemd-flag-162846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 21:04:16.419762  714452 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 21:04:16.448137  714452 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1014 21:04:16.448265  714452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 21:04:16.506408  714452 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-14 21:04:16.495730871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1014 21:04:16.506513  714452 docker.go:318] overlay module found
	I1014 21:04:16.509545  714452 out.go:179] * Using the docker driver based on user configuration
	I1014 21:04:16.512395  714452 start.go:305] selected driver: docker
	I1014 21:04:16.512417  714452 start.go:925] validating driver "docker" against <nil>
	I1014 21:04:16.512431  714452 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 21:04:16.516099  714452 out.go:203] 
	W1014 21:04:16.519024  714452 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1014 21:04:16.521946  714452 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-810782 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-810782

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-810782

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-810782

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-810782

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-810782

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-810782

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-810782

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-810782

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-810782

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-810782

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-810782

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-810782" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-810782" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-810782

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810782"

                                                
                                                
----------------------- debugLogs end: false-810782 [took: 3.327978437s] --------------------------------
helpers_test.go:175: Cleaning up "false-810782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-810782
--- PASS: TestNetworkPlugins/group/false (3.68s)
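
All of the debugLogs probes above fail for the same reason: the kubectl config dump in the log is empty (clusters and contexts are null), so no context named false-810782 ever existed; the test only verifies that the start attempt is rejected. Against a profile that did start (for example auto-810782 later in this run), the same checks can be approximated by hand. The netcat deployment name comes from the log's own labels; the exact commands below are an assumption, not what net_test.go runs:

    # in-cluster DNS probes via the netcat deployment (assumes nslookup/dig exist in its image)
    kubectl --context auto-810782 exec deploy/netcat -- nslookup kubernetes.default
    kubectl --context auto-810782 exec deploy/netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local +tcp
    # host-side checks go through the profile's node instead of kubectl
    out/minikube-linux-arm64 -p auto-810782 ssh "cat /etc/resolv.conf"
    out/minikube-linux-arm64 -p auto-810782 ssh "sudo crictl pods"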

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (59.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1014 21:13:41.304438  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (59.352439046s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (59.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-624341 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ef3fe4a0-c583-4dca-a1ca-5d7dfd71d192] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ef3fe4a0-c583-4dca-a1ca-5d7dfd71d192] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003319548s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-624341 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)
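
The DeployApp step above reduces to three kubectl calls against the profile's context; a minimal sketch for repeating it by hand (the kubectl wait call is an assumption standing in for the harness's own pod polling, and testdata/busybox.yaml lives in the minikube test source tree):

    kubectl --context old-k8s-version-624341 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-624341 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-624341 exec busybox -- /bin/sh -c "ulimit -n"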

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-624341 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-624341 --alsologtostderr -v=3: (11.889728532s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.89s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-624341 -n old-k8s-version-624341
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-624341 -n old-k8s-version-624341: exit status 7 (78.605586ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-624341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
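
EnableAddonAfterStop exercises the case where the cluster is stopped: status reports the host as Stopped with exit status 7 (treated as acceptable), and the dashboard addon is then enabled against the stored profile configuration. The same two commands from the log, chained as one might run them interactively (the || guard is an assumption, not part of the test):

    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-624341 -n old-k8s-version-624341 || echo "status exited $? (7 == Stopped here, acceptable)"
    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-624341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4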

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (51.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-624341 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.109988309s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-624341 -n old-k8s-version-624341
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.61s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-w9nz4" [0a5d607f-d276-4275-904a-eaad8181d089] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003815717s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m24.977952006s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.98s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-w9nz4" [0a5d607f-d276-4275-904a-eaad8181d089] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004363096s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-624341 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-624341 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)
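
VerifyKubernetesImages lists the images cached in the profile and reports anything it does not expect for that Kubernetes version ("non-minikube images"), which is where the kindest and busybox entries above come from. A crude way to eyeball the same thing without parsing the JSON properly (a grep over the raw output; not what the test itself does):

    out/minikube-linux-arm64 -p old-k8s-version-624341 image list --format=json | grep -oE '"[^"]*(kindest|k8s-minikube)[^"]*"'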

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (83.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1014 21:16:44.393037  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m23.090008604s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-319221 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6b1224d1-c597-4749-b584-15d6c3da4e51] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6b1224d1-c597-4749-b584-15d6c3da4e51] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003778715s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-319221 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-319221 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-319221 --alsologtostderr -v=3: (11.956268573s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-650951 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [92172836-5a93-4799-bd2f-e485cdace7f5] Pending
helpers_test.go:352: "busybox" [92172836-5a93-4799-bd2f-e485cdace7f5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [92172836-5a93-4799-bd2f-e485cdace7f5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004042123s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-650951 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-319221 -n default-k8s-diff-port-319221
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-319221 -n default-k8s-diff-port-319221: exit status 7 (73.706775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-319221 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.62s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-319221 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (56.195207761s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-319221 -n default-k8s-diff-port-319221
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.62s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-650951 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-650951 --alsologtostderr -v=3: (12.326711142s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-650951 -n embed-certs-650951
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-650951 -n embed-certs-650951: exit status 7 (85.587993ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-650951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (54.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-650951 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.514634228s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-650951 -n embed-certs-650951
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (54.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kvztt" [0f154480-9f18-4886-83d3-fde2d03ff6f4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003479536s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kvztt" [0f154480-9f18-4886-83d3-fde2d03ff6f4] Running
E1014 21:18:41.305157  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003793898s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-319221 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-319221 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7m6nl" [51a39b76-fee4-46e1-8974-42d12879a05d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003469297s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (73.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-970287 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-970287 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m13.00576887s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7m6nl" [51a39b76-fee4-46e1-8974-42d12879a05d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002855596s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-650951 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-650951 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (48.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-718270 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1014 21:19:32.612271  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:19:32.618540  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:19:32.629854  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:19:32.651185  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:19:32.692568  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:19:32.773911  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:19:32.935245  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:19:33.256802  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:19:33.898637  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:19:35.179923  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:19:37.742292  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:19:42.863831  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:19:53.105523  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-718270 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.098262184s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-718270 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-718270 --alsologtostderr -v=3: (1.267281429s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718270 -n newest-cni-718270
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718270 -n newest-cni-718270: exit status 7 (91.03032ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-718270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-970287 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9b46c18b-db2a-4ed6-9a6a-03b201869a09] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9b46c18b-db2a-4ed6-9a6a-03b201869a09] Running
E1014 21:20:13.587138  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004382012s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-970287 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.52s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-718270 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-718270 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (17.382912823s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718270 -n newest-cni-718270
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.77s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-970287 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-970287 --alsologtostderr -v=3: (12.229910834s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-718270 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970287 -n no-preload-970287
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970287 -n no-preload-970287: exit status 7 (110.334529ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-970287 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (61.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-970287 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-970287 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m1.129922199s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970287 -n no-preload-970287
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (61.51s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (83.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-810782 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1014 21:20:54.549025  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-810782 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m23.789375352s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.79s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f4mgk" [545dbf55-bce4-4aed-a463-550b5136ae40] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002654577s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f4mgk" [545dbf55-bce4-4aed-a463-550b5136ae40] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004419677s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-970287 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-970287 image list --format=json
E1014 21:21:44.393069  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
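The image check above only lists what the container runtime holds and flags anything outside the expected set. A rough manual equivalent (a sketch, assuming minikube on PATH and jq installed; repoTags is an assumption about the JSON field name, and the profile name comes from this run):

  # Dump the images known to the runtime in this profile as JSON and print
  # their tags so unexpected entries stand out.
  minikube -p no-preload-970287 image list --format=json | jq -r '.[].repoTags[]?'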

TestNetworkPlugins/group/kindnet/Start (90.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-810782 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-810782 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m30.105308487s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (90.11s)

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-810782 "pgrep -a kubelet"
I1014 21:21:58.304967  546910 config.go:182] Loaded profile config "auto-810782": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

TestNetworkPlugins/group/auto/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-810782 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9c4zn" [36064091-1760-47e0-b57a-a16870493c4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9c4zn" [36064091-1760-47e0-b57a-a16870493c4b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.003528936s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.30s)
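Each NetCatPod step applies testdata/netcat-deployment.yaml and then polls until pods labelled app=netcat are healthy. The same wait can be done directly with kubectl; a sketch, assuming the auto-810782 context from this run still exists and the manifest path is resolved from the test tree:

  # Re-create the netcat deployment the suite uses, then block until its pods are Ready.
  kubectl --context auto-810782 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context auto-810782 wait pod -l app=netcat --for=condition=Ready --timeout=15m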

TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-810782 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-810782 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-810782 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
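The DNS, Localhost, and HairPin steps above are three small probes executed inside the netcat deployment: a lookup of kubernetes.default through cluster DNS, a TCP connect to the pod's own localhost:8080, and a TCP connect back to the netcat service name, which exercises hairpin traffic. They can be replayed by hand; a sketch, assuming the auto-810782 context and the netcat deployment from the previous step still exist:

  CTX=auto-810782
  # DNS resolution through the cluster DNS service.
  kubectl --context "$CTX" exec deployment/netcat -- nslookup kubernetes.default
  # Loopback connectivity to the pod's own port.
  kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # Hairpin: reach the pod again via its service name.
  kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"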

TestNetworkPlugins/group/calico/Start (55.83s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-810782 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1014 21:22:54.597529  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-810782 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (55.83408672s)
--- PASS: TestNetworkPlugins/group/calico/Start (55.83s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-jmt86" [6e228348-827b-47a4-8938-5ae7fc2eb565] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003785474s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
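ControllerPod only verifies that the CNI's own pods come up; for kindnet that means pods labelled app=kindnet in kube-system. A hand-run equivalent (a sketch, assuming the kindnet-810782 context from this run):

  # kindnet ships as a DaemonSet; wait until its pods are Ready.
  kubectl --context kindnet-810782 -n kube-system wait pod -l app=kindnet \
    --for=condition=Ready --timeout=10m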

TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-810782 "pgrep -a kubelet"
I1014 21:23:30.353600  546910 config.go:182] Loaded profile config "kindnet-810782": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)
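KubeletFlags just confirms a kubelet process is running with the expected arguments; pgrep -a prints the PID plus the full command line. To eyeball the runtime-related flags directly (a sketch, assuming minikube on PATH; the grep pattern is illustrative, not the test's actual assertion):

  # Show the kubelet command line inside the node and pick out the flags that
  # mention the container runtime, e.g. the CRI socket it was pointed at.
  minikube ssh -p kindnet-810782 "pgrep -a kubelet" | tr ' ' '\n' | grep -i runtime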

TestNetworkPlugins/group/kindnet/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-810782 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mvps4" [d8d4a9b7-c90d-43f1-80bb-6d026480f6e0] Pending
helpers_test.go:352: "netcat-cd4db9dbf-mvps4" [d8d4a9b7-c90d-43f1-80bb-6d026480f6e0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1014 21:23:35.559733  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-mvps4" [d8d4a9b7-c90d-43f1-80bb-6d026480f6e0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005435841s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.37s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-8ms96" [c3f40400-e89b-4193-9c65-aeabf4a19286] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005805726s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-810782 "pgrep -a kubelet"
I1014 21:23:38.383586  546910 config.go:182] Loaded profile config "calico-810782": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

TestNetworkPlugins/group/calico/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-810782 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vt5bh" [09e9ef3b-1d92-4393-aecb-dc6a9cf404d1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1014 21:23:41.304488  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/functional-276405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-vt5bh" [09e9ef3b-1d92-4393-aecb-dc6a9cf404d1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004447323s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.28s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-810782 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-810782 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-810782 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-810782 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-810782 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-810782 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/Start (70.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-810782 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-810782 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m10.130952977s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.13s)

TestNetworkPlugins/group/enable-default-cni/Start (51.84s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-810782 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1014 21:24:32.611669  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:24:57.481513  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:25:00.313287  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/old-k8s-version-624341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:25:07.166841  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:25:07.173306  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:25:07.184751  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:25:07.206298  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:25:07.248216  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:25:07.329692  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:25:07.491226  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:25:07.812534  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:25:08.454257  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-810782 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (51.839336249s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (51.84s)
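The cert_rotation errors interleaved with this start are not produced by enable-default-cni itself: they are, presumably, stale client-cert watchers in the long-running test process still pointing at profiles that have already been deleted (old-k8s-version-624341, default-k8s-diff-port-319221, no-preload-970287). A quick way to see which profile certificates actually remain on disk (a sketch, using the workspace path that appears in the log):

  # Each live profile keeps a client certificate here; the paths the watcher
  # complains about are the ones that no longer exist.
  ls /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/*/client.crt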

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-810782 "pgrep -a kubelet"
I1014 21:25:09.013274  546910 config.go:182] Loaded profile config "enable-default-cni-810782": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-810782 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-299fr" [765ce7c7-a739-4d0a-b07a-ac25267bac25] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1014 21:25:09.736171  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:25:12.298393  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-299fr" [765ce7c7-a739-4d0a-b07a-ac25267bac25] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003975322s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-810782 "pgrep -a kubelet"
I1014 21:25:17.015175  546910 config.go:182] Loaded profile config "custom-flannel-810782": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-810782 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ldpqw" [993c6ade-b779-4387-a719-8ce56f8918b4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1014 21:25:17.419770  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-ldpqw" [993c6ade-b779-4387-a719-8ce56f8918b4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005632928s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-810782 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-810782 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-810782 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-810782 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-810782 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-810782 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1014 21:25:27.661455  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (67.87s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-810782 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1014 21:25:48.143246  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-810782 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m7.865286418s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.87s)

TestNetworkPlugins/group/bridge/Start (73.51s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-810782 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1014 21:26:29.104942  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/no-preload-970287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:26:44.392380  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/addons-864166/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-810782 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m13.513510031s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.51s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-wdxfh" [7393b8e1-b22e-4628-a1cd-ea6a031ecc80] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003511995s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-810782 "pgrep -a kubelet"
I1014 21:26:55.025999  546910 config.go:182] Loaded profile config "flannel-810782": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-810782 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m7wzd" [8c1b5763-a5bf-47e4-906c-a3374bc66c6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1014 21:26:58.572853  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:26:58.579344  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:26:58.590692  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:26:58.612095  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:26:58.653645  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:26:58.735492  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:26:58.897352  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:26:59.218739  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-m7wzd" [8c1b5763-a5bf-47e4-906c-a3374bc66c6e] Running
E1014 21:26:59.860862  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:27:01.143174  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:27:03.705362  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003157705s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-810782 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-810782 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-810782 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-810782 "pgrep -a kubelet"
I1014 21:27:07.140068  546910 config.go:182] Loaded profile config "bridge-810782": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-810782 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nj8v8" [968d7bea-994b-4399-bea3-78a17a0d2545] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1014 21:27:08.827488  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/auto-810782/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-nj8v8" [968d7bea-994b-4399-bea3-78a17a0d2545] Running
E1014 21:27:13.621237  546910 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-545047/.minikube/profiles/default-k8s-diff-port-319221/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003402974s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-810782 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-810782 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-810782 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-282779 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-282779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-282779
--- SKIP: TestDownloadOnlyKic (0.43s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-457849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-457849
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.45s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-810782 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-810782

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-810782

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-810782

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-810782

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-810782

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-810782

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-810782

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-810782

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-810782

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-810782

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-810782

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-810782" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-810782" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-810782

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810782"

                                                
                                                
----------------------- debugLogs end: kubenet-810782 [took: 3.296794531s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-810782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-810782
--- SKIP: TestNetworkPlugins/group/kubenet (3.45s)
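
Note: the legacy kubenet network plugin is not an option for crio-backed profiles, which need an explicit CNI instead (hence the skip above). A crio profile with a CNI configured would be started along the lines of:

  minikube start -p kubenet-810782 --container-runtime=cri-o --cni=bridge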

                                                
                                    
TestNetworkPlugins/group/cilium (3.86s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-810782 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-810782

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-810782

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-810782

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-810782

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-810782

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-810782

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-810782

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-810782

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-810782

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-810782

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-810782

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-810782" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-810782

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-810782

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-810782

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-810782

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-810782" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-810782" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-810782

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-810782" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810782"

                                                
                                                
----------------------- debugLogs end: cilium-810782 [took: 3.702613764s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-810782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-810782
--- SKIP: TestNetworkPlugins/group/cilium (3.86s)
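
Note: the cilium group is skipped deliberately because it interferes with other tests and is considered outdated in this suite; the CNI itself can still be selected directly when starting a profile, for example:

  minikube start -p cilium-810782 --cni=cilium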

                                                
                                    